Tent v0.1 (tent.io)
293 points by sgwil on Sept 21, 2012 | hide | past | favorite | 194 comments


Decentralizing things like this is never ever going to work. In terms of running it... my mom/sister/dad/cousin are not going to host this themselves. They might depend on me to do it, but even I'm not going to run a decentralized social network myself. There's too much liability involved. Not to mention, it's a pain in the ass.

I want to hack and work on my own ideas. That's why I trust my mail hosting to Google. Sure, they read all my emails and analyze me for marketing. That's a small price to pay though. I get a rad web client, Apple Mail + iPhone support, and never need to worry about my email server. Anyone remember qmailrocks.org? I used to sit for hours messing with qmail back in the day. I've done it. I've hosted mail servers for friends and family. What a pain in the fucking ass!

Same for hosting. I love configuring infrastructure. I taught myself HAProxy, load balancing, nginx, and clustering years ago when people were still using mod_python. I built a FreeBSD box for fun with old parts and an old SSD the other weekend. I have quite a few linodes running right now. But it's also a pain in the ass. Because of that, I also enjoy pushing my personal site to Heroku and not worrying about it ever again. git push, boom done.

A lot of people take things like Facebook for granted. We all bitch and moan about all kinds of issues it has or features it lacks, but it's a gargantuan network that virtually everyone I know or care to talk to is on. My family and friends back home in LA where I grew up, my new friends in Sweden that I met while abroad recently. It's got a handy mobile app (that was luckily redeveloped recently) and best of all it's 100% free.

This decentralized stuff is nonsense. There's too many layers of crap. Even if there was a one-click-deploy social network for people where all they needed to do was rent a server somewhere... someone is still running that server and you need to worry about them snooping your data or whatever else the neckbeards are worried about.

Remember Diaspora? Hah.


You say decentralized services never work, and then give an example of the most successful one: email.

I don't know why you are assuming that everyone will have to run their own.

Also, it seems very cynical/naive to think that just because such systems aren't currently working/won't currently work with given social structures, that this will always be the case. Do you disagree?


Not only is email a successful decentralised system, it's the logical infrastructure for a decentralised social network. In Facebook terms, your "feed" is just an inbox with a smart filter and nice presentation, and your "wall" is just a mailing list to which all of your friends are subscribed.
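To make that mapping concrete, here's a toy Python sketch (all names invented for illustration) of a "wall" as a mailing list that fans out to subscribers, and a "feed" as an inbox run through a filter:

```python
from collections import defaultdict

class Wall:
    """A 'wall' as a mailing list: posting fans out to every subscriber's inbox."""
    def __init__(self, owner):
        self.owner = owner
        self.subscribers = set()
        self.inboxes = defaultdict(list)  # subscriber -> received (sender, message) pairs

    def subscribe(self, friend):
        self.subscribers.add(friend)

    def post(self, message):
        # deliver to every subscriber, exactly like a mailing list expansion
        for friend in self.subscribers:
            self.inboxes[friend].append((self.owner, message))

def feed(inbox, wanted_senders):
    """A 'feed' is just the inbox with a smart filter applied."""
    return [(sender, msg) for sender, msg in inbox if sender in wanted_senders]

wall = Wall("alice")
wall.subscribe("bob")
wall.subscribe("carol")
wall.post("hello, world")
print(feed(wall.inboxes["bob"], {"alice"}))  # [('alice', 'hello, world')]
```

The presentation layer (threading, thumbnails, ranking) sits on top; the delivery model underneath is the same one mailing lists have used for decades.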


This is so profoundly true. It's all just feeds. RSS could have been implemented as mailing lists, too. Maybe we should use mailing lists to build the next big decentralized social network instead of this infrastructure.


I'll just leave this here http://mobisocial.stanford.edu/papers/hotpets11.pdf First sentence of the abstract: "This paper proposes Mr. Privacy, a social application framework built on top of email..."


Wait, wasn't RSS invented to replace mailing lists (a.k.a. newsletters)?

But yeah, making a facebook/twitter-style social network app that uses plain email as transport would be really cool.


> Wait, wasn't RSS invented to replace mailing lists (aka news letters)?

Probably. On the server-side RSS is nicer in some respects - you're not sending out mail to old, disused inboxes, you don't have to manage subscriptions or worry about being blocked as spam etc.

One thing that I think might make email better is if people got used to the idea of "email apps". The idea that my inbox might collate stories from a given source into a scannable digest (like RSS might give me) seems like a strange departure from the traditional idea of what an email client or service is "meant to do" - present discrete, unmodified messages in the most transparent way possible.
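As a sketch of that kind of "email app", here's a hypothetical digest collator in Python (the inbox format and sender addresses are invented for illustration):

```python
from collections import defaultdict

def collate_digest(inbox):
    """Group discrete messages into one scannable digest per source,
    instead of presenting each message untouched."""
    by_source = defaultdict(list)
    for sender, subject in inbox:
        by_source[sender].append(subject)
    lines = []
    for sender, subjects in sorted(by_source.items()):
        lines.append(f"{sender} ({len(subjects)})")
        lines.extend(f"  - {s}" for s in subjects)
    return "\n".join(lines)

inbox = [
    ("news@example.com", "Tent v0.1 released"),
    ("updates@example.org", "Weekly roundup"),
    ("news@example.com", "Diaspora retrospective"),
]
print(collate_digest(inbox))
```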

For a social network it's clear that you'd need something like that, though - you'd need clever filters on your feed, thumbnails on images, possible integration with image-sharing services etc etc. Using email and mailing lists gives you a big head-start, but if you can't build on top of it then it can't take off like a dedicated protocol can.


XMPP has this neat feature that allows you to know what features are supported by a remote client, allowing you to just send social stuff to online clients that want it. Email doesn't have this, but it could be approximated by some clever use of multipart messages. It'd be even better with filters, but you can't count on things like sieve being available.
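As a rough illustration of the multipart idea, Python's stdlib email package can build a message with a human-readable part plus a machine-readable alternative. The application/json part and its payload shape are invented here for the sketch, not any real convention:

```python
import json
from email.message import EmailMessage

# A plain-text part any client can show, plus a machine-readable
# alternative a "social-aware" client could act on instead.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Alice posted a photo"
msg.set_content("Alice posted a photo: https://example.com/p/42")
msg.add_alternative(
    json.dumps({"type": "photo", "url": "https://example.com/p/42"}).encode(),
    maintype="application",
    subtype="json",
)

print(msg.get_content_type())  # multipart/alternative
```

A dumb client renders the text/plain part; a clever one spots the structured part and shows a photo card. No capability negotiation needed, just graceful degradation.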


Email essentially IS centralized at DNS. Any decentralized social network has to solve the lookup problem: how do I find people? None really solves this in a way that's going to make it fundamentally different from what we have now.

I can't help feeling that the natural outcome of decentralized social networking is that people eventually start hosting them for mom and dad, then a few friends, then start companies, then we are back to facebook. Someone, somewhere is going to have to have a database of names mapped to servers.

Centralization is, in an abstract way, present in so many of our social structures that yes, I do think that within a capitalist economic structure (one optimized for sharing of services rather than individualism; i.e. non-technical users simply won't do this unless the work is done for them, like Facebook did) it would be next to impossible for something like this to work.

People quite simply don't want to deal with this shit, and centralization brings efficiency and management amongst many other things.


Email is a decentralized service and the task of discovery is solved with DNS. If you have an address, jackalope@example.com, you can find out where to send the message if it is explicitly set (MX record) or not (A record). I don't see why you can't do the same with a decentralized social network.
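That MX-then-A fallback can be sketched with in-memory tables standing in for real DNS answers (per RFC 5321, an explicit MX record wins and the lowest preference value is tried first; the domains here are examples only):

```python
def resolve_mail_host(domain, mx_records, a_records):
    """RFC 5321 lookup order: use the MX record if one is explicitly set,
    otherwise fall back to the domain's A record."""
    if domain in mx_records:
        # lowest preference number wins
        preference, host = min(mx_records[domain])
        return host
    return a_records.get(domain)

mx = {"example.com": [(20, "backup.example.com"), (10, "mail.example.com")]}
a = {"example.com": "example.com", "solo.example.org": "solo.example.org"}
print(resolve_mail_host("example.com", mx, a))       # mail.example.com
print(resolve_mail_host("solo.example.org", mx, a))  # solo.example.org
```

A decentralized social network could resolve `user@host` identities the same way, perhaps with its own record type or a well-known path.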


And who runs the DNS servers?


The most popular ones are run by IANA, ICANN, and Verisign.

But nothing stops most of us in the free world from choosing an alternative root system. Other than the support cost of a minor config change in our OS or application software.


Or mymail@[127.0.0.1]


> Email essentially IS centralized at DNS.

The difference between DNS and Facebook/Twitter is that DNS is regulated so that bad guys have to go through some kind of due process to shut down your domain. It's not perfect, but it's better than what else we've got.


AFAIK it is still possible with most popular email servers to send and receive using only IP addresses (e.g. by placing the address in brackets).

DNS is really only optional for email, at least functionally. But I think a lot of people assume there can be no email without domain names.


So why not regulate Facebook and Twitter instead of making another service that will essentially turn into DNS, which we already have?


Facebook and Twitter were funded under the assumption that they would be unregulated monopolies, so regulating them amounts to stealing from their investors.

Instead of another service that will essentially turn into DNS, I would propose just using DNS for naming. So if you're Tyler Durden you register tyler.durden.name and put an A record pointing to your DiSo server. Now you can switch providers while retaining your identity by just changing the A record.
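In zone-file terms (hypothetical name, documentation-range addresses), switching providers really is a one-line change:

```
; hypothetical zone fragment for tyler.durden.name
tyler.durden.name.   3600  IN  A  203.0.113.7    ; current DiSo host
; to switch providers, repoint the record:
; tyler.durden.name. 3600  IN  A  198.51.100.9   ; new host
```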


Exactly. No matter how decentralized the infrastructure is, there is always a centralized node somewhere, such as a BitTorrent tracker.


BitTorrent is capable of being decentralized from day one: http://en.wikipedia.org/wiki/BitTorrent_tracker#Trackerless_...


    The original BitTorrent client was the first to offer decentralized [...]
Is not the same as "from day one"


Why isn't it?


Exactly right. Very few people want the hassle of running their own email server. It happens I am the kind of person that likes running his own email server. I run my mail server in such a way that it is appealing to many of my friends to host their email accounts with me. Many other people find their accounts with gmail, or their ISP or their business. These are ALL examples of the success of our decentralized email system. Individuals, small businesses, and large business can all enter the market, offer advantageous features, compete on price, etc.

Facebook is the opposite of this. I have no option to build my own site for facebook style networking. I have no option to subscribe to Google's implementation of a general social networking protocol.


> That's why I trust my mail hosting to Google.

Imagine if GMail only allowed you to send Google Mail(TM) to other @gmail.com addresses. Email's decentralization allows us to ignore the fact that other people may be reading on an iphone, yahoo mail, or emacs.

Decentralization isn't (only) for neckbeards. It's about giving users freedom to socialize with those who choose a different client. Isn't it bizarre that I use LinkedIn, you use Facebook, therefore we can't be friends?

First we build the decentralized social network, then I will decide if I want to social network with you via google, yahoo, or emacs.


ISPs hosted email servers for individuals for years. I don't know why hosting a social networking server for customers would be any different. I don't know enough about the tent.io API to decide how I feel about it, but I think that history with SMTP, NNTP, etc, shows that decentralized protocols with servers hosted by a diversity of third-parties (instead of a single Facebook, Google+, etc) can work.


We are launching a hosted version very soon, and expect others to as well.


I would think ISPs would love to host folks' social network servers. Just think of all the social data they'd have at their disposal for whatever they could use it for.


I can't understand how you say that decentralized systems can never work, then point to e-mail as an example of doing it right. E-mail is the classic decentralized communication system!


Except, as GP points out, the vast expansion of the email system was followed by a huge number of users running to GMail and a huge number of domains running to Google Apps and other hosted platforms. My parents don't run mail servers. My employer no longer runs a mail server (outsourced to a common third-party provider), and I used to run a mail server but I gave up on it and fell back on Google Apps when I needed one. Running a mail server is a huge pain and once centralized systems became available (Hotmail, then Gmail and Google Apps) most users fell back on them.


Even a situation where 99% of people use a handful of providers is a lot more free than everyone being forced to use a single provider.


This. And if you put just a tiny bit of effort into it and own your own domain, you can switch providers at any time with little effort. A world of difference from the Facebook trap.


"Decentralized" doesn't mean outsourced vs self-hosted. It just means there isn't a central/core part of a network that everything relies on. Outsourcing of decentralized systems is very popular (see: email, web hosting, DNS, etc.)


Yes. I think most people miss this. For peer-to-peer to succeed I think there needs to be some amount of "re-education". There needs to be a shift in thinking. Many of these things that others are doing for you (email, etc.) can just as easily run from your machine instead of theirs. You do not need the same capacity and ability to scale as they do.

What makes it all possible is NAT traversal. That is the fundamental problem being solved which takes us back to peer-to-peer and the original model of the internet. The rest is all just pre-configuration and UI.


I think NAT traversal is actually only one part of P2P becoming successful. Between UDP hole punching, UPnP, and other techniques you can cover a pretty large percentage of users.

The harder part, to me, is distributed trust in the discovery phase. Without a central authority it's really hard to ensure authenticity, fight spam, etc.
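For what it's worth, hole punching boils down to "both sides send first." Here's a loopback-only Python sketch; the two sockets stand in for peers behind NATs, and the rendezvous server that would exchange their public endpoints is elided:

```python
import socket

# Two "peers" on loopback stand in for hosts behind NATs. In real hole
# punching, each learns the other's public endpoint from a rendezvous
# server, then both transmit first so each NAT opens an outbound mapping
# that the other side's packets can come back through.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for s in (a, b):
    s.bind(("127.0.0.1", 0))
    s.settimeout(2.0)

# both sides send first
a.sendto(b"hello from a", b.getsockname())
b.sendto(b"hello from b", a.getsockname())

at_b, _ = b.recvfrom(1024)
at_a, _ = a.recvfrom(1024)
print(at_b.decode(), "/", at_a.decode())  # hello from a / hello from b
```

On real networks you also need retries and keepalives, and symmetric NATs defeat the trick entirely, which is why fallback relays exist.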


I really don't think it's as big an issue as people make it.

People commenting about this in lists and forums and blogs make the issue more complex than it is because they have all sorts of assumptions about how it has to work. But as Rob Pike says, when you assume you put plum paste on your ass. "Discovery phase." This is nerd stuff.

Put all the nerd stuff aside for a moment. When you want to telephone someone, how do you handle the "discovery phase"? You look up the telephone number and place the call. Everyone does this the world over. Second, how many people do you actually telephone in your entire lifetime? Hundreds? Thousands? (Are you a telemarketer?) Third, how many of them are people you see face-to-face at some point? People do not normally call lots of anonymous people.

A telephone network. (Actually a system of regional networks.) Everyone has a number. You get their number and you make the call. But how? According to nerds on the web, this should be a complex problem with no easy solution. But millions of people, non-technical people, make telephone calls everyday.

To think the web equivalent of a telephone directory is an unsolvable problem is ridiculous. And to think that every person needs a directory with billions of names is similarly silly. They will only need a small subset.

Moreover, with modern storage capacities, you could store a billion names and numbers on a small device no problem. Searching billion-row databases is quite fast, if you use the right software. But you will never even need to go that far.
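For example, even naive off-the-shelf software handles directory-scale lookups easily. Here's a small synthetic demo with an indexed SQLite table (the names and numbers are made up; a real directory would just have more rows, and the B-tree index keeps lookups logarithmic either way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE directory (name TEXT PRIMARY KEY, number TEXT)")
# PRIMARY KEY gives us an index, so lookups never scan the whole table
conn.executemany(
    "INSERT INTO directory VALUES (?, ?)",
    ((f"person{i}", f"+1-555-{i:07d}") for i in range(100_000)),
)
row = conn.execute(
    "SELECT number FROM directory WHERE name = ?", ("person42",)
).fetchone()
print(row[0])  # +1-555-0000042
```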

AFAIK people do not change their telephone numbers very frequently. Certainly not on a whim. If they do change it, then they have to let others know. Somehow they manage this, even if they are non-technical. Of course if you keep changing your number you'll be harder to contact. This is common sense. But why would you do that? Why would you need to keep changing your number? Get a number and stick to it.

If you think stuff like MobileIP is the solution to roaming then you need to keep looking at what's possible. There are simpler and more reliable ways to do it.

How many people do you call in a lifetime? Think about it.


I meant "discovery phase" in the broader sense of locating people and content around you, making sure that they are who they say they are, and that they have the content that you want. Less about keeping track of where to reach someone and more about trust.

If you're thinking 'p2p' as skype is 'p2p', then fine, a directory service is easy to build. You just set up a centralized service to handle lookup, collision disputes, spam fighting, etc. Then when the centralized registry decides it wants everyone to start using their real names, we're back to where we started.

Lastly, you're right that the average person only calls a handful of people. But this is not the 1970s phone network, this is the internet. It's a different communication medium with different rules. Your HN account was created one day ago and has no contact information. The chances I could call you up on the phone are basically zero, yet somehow we're having a conversation.


> Then when the centralized registry decides it wants everyone to start using their real names, we're back to where we started.

Not really. Directory lookup should use a standardized API so that when one directory service goes rogue, clients can just switch to another they trust.

I do think that it's essential for this to be standardized. Not only for this reason, but clients/hosts shouldn't be saddled with the responsibility of indexing the entire social web themselves.


If it's standardized with multiple providers then either a) you can't communicate with people who have switched to a different directory service or b) it's no longer centralized. :)


"This is not ... this is the internet."

That is exactly the type of conceptual roadblock I was alluding to in my comment.

But if you look at what people, not just nerds, want to do over the internet, a significant portion (perhaps even the majority) of it is the same old stuff they did in the 1970s: send person-to-person messages and make person-to-person telephone calls.

The new dimension is of course person-to-person video, and we now have the more ubiquitous bandwidth to make this happen that we didn't have in the 1990s.

Whether you see possibilities or just impediments depends on how you define the problem.

If you define the problem as achieving some sort of perfectly anonymous communication, an anonymous interchange of bits, then you are outside of the problem I'm discussing. It may be a fascinating problem, but it is not one that interests me.

I'm talking about doing those same old things we were doing in the 1970s, with the same degree of privacy we had (absence of advertisements and senseless statistical analysis of every word we say for commercial purposes), using the internet.

Millions of people manage to call their friends and family despite the issues of "discovery phase" and "trust". Millions of people have a connection to a telephone network despite the potential for "spam" (cold calling? telemarketing?)

Again, if you redefine the problem your own way++ you can argue endlessly against any sort of simple proposal, e.g. one that aims to make something we've all been doing since the 1970s easy, for everyone, not just nerds: connecting, directly, to people you know.+++ Facebook seems to revolve around the idea of communicating with people ("friends") you know in real life, not online acquaintances you have never met. Is that the reason it's popular? I don't know. And I don't really care. Even if there were no such thing as a "social network" (buzzword), I see a utility in being able to stay in contact with people I know in real life. I fail to see Facebook, which is roughly the real-world equivalent of a megaphone, as being necessary to do that. Why do I have to route all my bits through Facebook's website? The answer is: I don't. And NAT traversal is the solution.

++ We need a way to have anonymous communication with strangers. As other commenters have said, that problem is for the cypherpunks.

+++ And I'm pretty sure this is how the very early internet was, before firewall mania and NAT. People had met the others they were connecting to in real life. Or, at least, they had their real life contact information and could pick up the phone and call them, or send them a letter.


IPv6 is going to solve that.

RetroShare is, AFAIK, a completely decentralized private social network, although its security design isn't as good as I would wish it to be. A good network uses PKI, and all data being transferred or at rest is secured. This also means that service providers can't be held responsible.

Many decentralized designs still allow operators to snoop on the parts of the network they operate. If there's a separate client anyway, why not make sure the system is secure?


I don't know a lot about RetroShare but from what little I know I think they actually had the right approach. It was real peer-to-peer.

I'm not thinking about peer-to-peer from an anonymous file sharing perspective. I'm thinking in much broader terms. But I know anonymous file sharing is what many projects have focused on.

You are right that many purported "peer-to-peer" solutions are not true p2p. Look hard and you'll usually find a middleman somewhere in the scheme. Of course as we all know many users are not really interested in how something works, just so long as it works. How many people really look hard at the scheme being used?

As for IPv6, I think it brings a lot of complexity. And where there is complexity, there will be problems. Working with IPv4 is complex enough as it is. I like IPv4 because it's more widely found. Not everyone will have access to IPv6.


So what? E-mail is still decentralized. That most people don't host it themselves is a completely unrelated concept.


Gmail does not have as large a market share as you think: http://www.campaignmonitor.com/resources/will-it-work/email-...

In general this is a common misconception at HN and in similar circles: Gmail may be ubiquitous among the tech crowd, but it's far from clear that it's even in first place among webmail providers.


Do you honestly believe that "most users" use GMail and Hotmail? Most users use an ISP, but lots and lots of companies host their own mail, sometimes for good legal reasons. It would be interesting to know if Microsoft sells more Exchange licenses to hosting companies, but I don't think so.


I cannot run my own mail server without significant extra expense, because the assholes who run Spamhaus decided every Comcast IP must be blocked. So over the last 10 years or so, it's become much harder to keep email decentralized--you either need to use an existing service, or pay through the nose for a "business-class" connection which hopefully won't be blocked.


Decentralized is not the same as self-hosted.


I just started to (self)host my email servers two weeks ago. I'm very happy with it.


I'd be very interested to hear about your experiences--what kind of net connection you have, static or dynamic IP, what software you're running, etc.


I disagree. The software will get easier over time until it is literally plug-and-play. But it won't happen without projects like this happening.


This. It's easy to assume that software will be hard to configure forever. We just have to not let that happen. Lots of people manage to set up their routers, right? How much more complicated does this need to be?


> Lots of people manage to set up their routers, right?

Because they have to. Don't set up a router? Don't get an internet connection. It's like how people manage to file their taxes.

The barrier for entry needs to be much, much lower than setting up a router, otherwise it'll just be passed over in favour of something else, like Facebook.

(This, though, like the internet, changes when you reach critical mass. Then barrier for entry can be higher without impacting uptake much.)


Exactly. iOS and Android are based on Unix but they're easy to use. There's nothing stopping the same thing from being done to servers.


> There's nothing stopping the same thing from being done to servers.

Wordpress.


Wordpress takes a simple thing (one-way flow of text from author to reader) and makes it easy. It could be a lot harder to deal with something where information was going both ways.

A very important part of an interactive webpage is how it changes over time and with different inputs. Web developers have no trouble looking at some example widgets and imagining all the possible things a user could make them look like, but most people (might) have trouble doing that.

The same thing, I imagine, applies to running servers. I only know a bit about operating servers, but from my experience there is a complete, unremovable complexity that comes with having a powerful set of tools. Perhaps we could remove almost all the configurability from the instant-server kit, but then why would you want to run your own?


> Perhaps we could remove almost all the configurability from the instant-server kit, but then why would you want to run your own?

Freedom. e.g. being allowed to use a pseudonym or post breastfeeding photos.


So decentralized email (you know, "email") doesn't work?

Your mom/sister/dad/cousin are going to use a service that allows them to post status updates, instead of just using Twitter. Just like your mom/sister/dad/cousin use Gmail for email, and not their own server.


I'd say "my mom/sister/dad/cousin are not going to host this themselves" is a tricky assumption. I can imagine a world where my mom creates an account with a service that kicks off an EC2 instance on her behalf without her even knowing it. Want to build a facebook that scales cheaply? Make every user bring their own server. On top of that, make "bringing a server" as easy as a registration process.


Though I believe the idea is that Tent be more like an email server: email hosts are able to offer their own features and interfaces, yet remain interoperable with one another.

Most people don't host their own email service, but most people do benefit from using email – and the best part of it is that there's no monolithic Email Inc. that could someday go bankrupt and bring everyone's emailing days to a close.


I would be willing to pay for a hosted version of a decentralized network that respects my privacy. Decentralized is important to me not because I want to run my own server, but because it means I can switch providers or choose to run my own server if I want.

Plus, it's really just an advanced version of email. I mean, you could build a fancy social network on top of SMTP. And SMTP is decentralized, so I don't see any reason why this shouldn't work.

Regarding Diaspora, it seems that tent.io is starting with the protocol instead of the frontend, which seems more promising to me.


Remember Skype? It's very much peer-to-peer, with various node types, third-party traffic forwarding, encryption end to end, QoS issues to manage, NATs and firewalls to handle, etc.

Did you try to install a recent Skype version? Download an executable, run, enter your login and password, and it just works.

Execution matters, you know.


The existing decentralized software out there (except for maybe Bitcoin, which is actually quite simple, and a couple of others) isn't sufficient for normal people because of the way it is presented. Diaspora failed horribly because it was hard to use on your own system. That, and it was rather scrambled at launch. The code was horrible.

If there was a system that acts like current centralized systems out there it wouldn't be a matter of it being hard. The technology is becoming easier and easier.

If presented properly, there is no reason a decentralized system couldn't attract normal people.

Facebook and Google make it appear that what you are using is actually on your system. That you control the information. Yet, the so called "small" price of that is you have to give away personal information to them.

With a system that is completely decentralized and secure and keeps people's privacy while being so easy to use that my Mother can use it w/out problem, centralized systems would be in danger.

So yes, the current way of doing things is for geeks. That will change with projects like Tent. Centralized services will ultimately fail simply because the violations of people's privacy are starting to really piss people off. Normal people. Like me.

If I want to share my music playlist with my friends, I should be able to do it. I also don't like being censored and watched, which is exactly what these systems do. Don't believe me? Search it. And if you don't mind that type of thing for convenience, then I'm not sure you understand freedom.

And by the way, those "neckbeards" are the ones who are going to set things straight. A decentralized system doesn't necessarily have "servers". Everyone will be their own type of server.


> my mom/sister/dad/cousin are not going to host this themselves. They might depend on me to do it, but even I'm not going to run a decentralized social network myself. There's too much liability involved. Not to mention, it's a pain in the ass.

That's not the point. Those aren't the ones decentralization benefits. Decentralization will benefit other organizations, such as government agencies, companies, non-profits, NGOs, etc. These organizations most likely run their other services either on their own infrastructure or pay someone else to host them. They benefit by retaining control of the namespace and the ability to post a message without relying on a centralized service.


I think that an open protocol for social networking could really take off. Different hosting companies could offer different services for the same underlying protocol. Some hosts may offer free hosting with advertisements, while others may offer the service without ads for a monthly rate, etc. Ideally, if people were unhappy with a hosting company, they could migrate their data to a different host. I like the comparison of a protocol like this to a protocol like email. I really believe that social networking should be a protocol and not an exclusive service owned by other companies.

Diaspora has some similarities with Tent, but there are a lot of differences between Tent and Diaspora.


Many people thought the same about LiveJournal and Blogger in the early 2000s, then Wordpress came along. This (and Diaspora) is Wordpress for social network profiles.


Decentralized protocols seem to work well for things like email and web sites though. I think the idea is that most people will use a provider to run the server, but it is possible/easy to switch between providers. The lack of lock-in and network effects at the provider specific level mean that competition can do its thing and people won't have to put up with shenanigans like retroactive privacy changes a-la Facebook.


Exactly.

There is a reason we don't dispose of our own garbage. It's just worth paying someone else (plus efficiencies of scale). Infrastructure like a social network is garbage: people don't wanna deal with it, don't care how it works, and are glad to externalize the cost. They just want their waste to magically disappear. Or, to maintain/increase their social status with friends and family.


And there's also IRC as an example of a decentralized service that is still alive.


Are you high?


This might be (just a little) too pessimistic.

The sweet spot both Tent and Diaspora are/were aiming for is one where most users use somebody else's service, but a few crazy neckbeards choose to self-host.

The neckbeards who actually care (often quite unreasonably) about privacy, etc, provide a very useful service to the ordinary users - they keep the central service(s) honest, ie, keep them from turning into Facebook. Because it's possible to escape, because the neckbeards keep that possibility open, the ordinary users are not captured.

At least, that's the theory. Diaspora didn't execute on it. Tent, well, we'll see.


I agree. This is like Usenet, or further back the old Fidonet days where most people accessed through a small local hub, but you still had the nutty people who ran their own node.

Make it easy for people to participate casually, but also make sure that people can customize to their liking and exert full control over their own stuff if they wish to, without causing trouble to the rest of the network.


No, nodes were usually BBS systems. If you wanted to use the same protocol, you usually ran your own point. Of course there were a few sysops with their own nodes, but a point was the usual way of personally joining the network using the official protocols. Of course most people used QWK and Blue Wave etc. to get messages quickly without setting up a system with its mandatory zone mail hour and so on. Didn't you run your own point/node? ;)


You're right, I got my terminology mixed up (it's been a while, heh). Point is what I meant. Thanks. :-)


SPOT ON CHAP.

This is truly, exactly it. I don't expect my mother or friend to run a Tent server. But imagine Facebook were a Tent/Diaspora host. People would have left in DROVES by now if another, somewhat equal host could migrate their data in and provide better user respect.


Tentd seems to be based on Ruby and Rack. Since it's based on Rack, I'm assuming then that it operates over HTTP. Which leads me to my current frustration...

Why does everything have to be built on top of HTTP? Sure, I know you're going to say, because it works, and works well that's why.

But think about it. The reason why we have all these centralized services in large part is because of HTTP. HTTP has made it really easy for someone to build a centralized service, or application, like Facebook or Twitter. Before the days of HTTP, if I wanted to build a service, I probably would invent a protocol like Telnet, Gopher, SMTP, IRC etc., which is often naturally decentralized, instead of programming a web application.

Perhaps if we want more decentralized services, we should focus on frameworks that make it easier to construct TCP/UDP based protocols.

Just a thought.
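For what it's worth, writing a raw TCP protocol is less daunting than it used to be. A toy line-oriented protocol in Python's asyncio (everything here is my own invention, not any existing framework or spec) fits in a few lines:

```python
import asyncio

# A toy line-oriented protocol: one request line in, an uppercased echo out.
async def handle(reader, writer):
    line = await reader.readline()
    writer.write(line.upper())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 asks the OS for any free port.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    # A client for the same protocol, talking to the server above.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello tent\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()

    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))  # b'HELLO TENT\n'
```

The hard parts of a new protocol aren't the sockets, of course, but framing, versioning, and NAT traversal.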


Very few people will be able to use non-HTTP protocols from their company networks. A significant number of home users behind NAT won't be able to use them without configuration steps that normal users simply won't make.

To that add the fact that there's actually minimal benefit for most protocols to using something other than HTTP. Obviously there are protocols that are much better run over something other than HTTP, but most protocols, probably including Tent, aren't like that.

http://cr.yp.to/sarcasm/modest-proposal.txt


Corporate networks are followers when it comes to social software. If a protocol becomes very popular among users eventually corporations will give in and make it available. Companies used to block HTTP also.


That has absolutely not been my experience in the last 8-9 years of doing onsite engagements for corporations. You can usually count on HTTPS getting through unmolested. Usually.


I meant back in the 90s. I worked at a couple of companies that saw no reason to give their users access to the internet which was seen as a time waster. I haven't come across that attitude in a very long time, even though there's probably more time wasted now than ever.


Sorry, message board fail. I was saying that you're lucky if you can get web traffic through corporate firewalls unmolested. You can only rarely get anything else out, and virtually never get anything in.


Normal users (ie, users behind NAT) shouldn't be running their own Tent servers, period. People who run their own servers are geeks by definition and can/should host in the cloud. Where "cloud" means "that part of the Internets that actually works."

But your second point is unanswerable. I'd go a little farther and say HTTP is actually a pretty good fit for Tent.


> Normal users shouldn't be running their own Tent servers

I used to agree with you that the general public shouldn't run their own server, but that argument may no longer apply.

For example, my home PC (a Mac) wakes up from sleep whenever I remotely need something off of it (took no setup on my part). The machine is always backed up, and already has all of my photos and address book. It's been just as reliable as my paid shared web host. So why not start deploying services from our home PCs?

Think of the obvious benefit—I no longer have to upload photos. Given some additional thought I bet we could come up with other benefits to hosting from home.

I'm not quite ready to advocate for this, but maybe it's time to reconsider this assumption.


Your home PC isn't really on the Internet. It's on a crappy pseudo-network that happens to be connected to the Internet.

This doesn't mean you shouldn't be able to use it as a server - you should. However, to use it as a server, you'll need a proxy/gateway server on the real Internet (ie, in the cloud) and a pseudo-protocol by which the gateway talks to your home PC. The pseudo-protocol could even look a lot like the protocol that real servers speak to each other on the real Internet - but there's no requirement that they be the same. (I'm not sure how well inbound port 80 works if you're on Comcast, anyway.) So, when designing the real-server protocol, it's a layering violation to also be thinking about this pseudopod.


There is no such thing as a layering violation. You should be immediately suspicious of anyone who claims that there are such things.


I've got it - we'll encode Ethernet frames in JSON and tunnel them over HTTPS to an IP stack in Javascript. With a JS DHCP client, your browser will grab an IP address on the same subnet as the cloud server. And it's business up front, party in the back...

"Layering violation" is not the best term for what I meant, which is just "design mistake." Also the OSI model doesn't have much to do with reality. But if you want layering violations...


Let me see if I understand the argument you've made: the idea that there are no layering violations is silly because people can make bad designs? Should we blame poor layering discipline for all the C code out there that calls gets(3)?

Modularity is a good thing. Composability often is too. For systems deployed in the large, the end-to-end argument also guides designs to simple cores and complex clients.

But none of that means that any given layer "owns" any piece of functionality. It would indeed make a whole lot of sense to start thinking of IP as the new Ethernet, for instance. But we can't really do that, because the greybeard priesthood uses concepts like "layering violation" to shut down discussion and maintain intellectual turf.


I don't think we're arguing about anything substantive. For the record, I shave my neck every day. I also have no problem with thinking of IP as the new Ethernet.

By "layering violation" I really meant "layering" in a systems rather than a protocol sense. For example, it'd be a "layering violation" in this sense if your OS had a system call print_powerpoint_document(2).


Modularity, compositions, and reusable/flexible kernels are strong design components. "Layers" are a straitjacket. I'm automatically suspicious of designers who invoke them, because they tend to have more to do with shutting down discussions than they do with designing. That's all.


Please read the definition of "inter". The Internet is a collection of crappy networks.


The gateway between your crappy network and the Internet is on the Internet. A node on your crappy network, however, is on your crappy network.

(What's unfortunate is that IPv6 seems destined to become no more than one of these "crappy networks.")


Your home PC isn't really on the Internet. It's on a crappy pseudo-network that happens to be connected to the Internet.

The hell are you talking about? I know I have a public IP address and am running several wikis, a Minecraft server, and a Mumble server from my home, accessible via the Internet (behind a firewall).

Next up is a SIP/XMPP server. Or maybe tent?


s/Your home/Your average home/


Including my dorm room (it was port restricted), my college apartment, my father's house, and my current home, I've always had a publicly accessible IP address.


Um, what? HTTP is a transport. And last I checked, the web is the world's largest distributed application... connected by... HTTP! On top of that, virtually every language has excellent HTTP libraries. There is little and less to gain by using something new or unique at this layer of the application. Using HTTP jump-starts the dialogue to the next level by not reinventing the wheel and getting mired in endless technical protocol discussion that does not advance the functionality of the platform.


Yes, all valid points. And yes, particularly as a transport protocol it would, like other protocols lead to decentralized services. But perhaps I should clarify a bit and say that, HTML and the web browser have led the Internet into a place with lots of centralized services.

If we see HTTP as a transport protocol like UDP or TCP, then yes, you're very right. But if we look at HTML and web browsers becoming the dominant applications of the Internet, then I believe that's how a lot of this centralization happened.

If you look at the web as a series of servers, sure it's decentralized. But contained on those servers are a lot of centralized services, i.e. Facebook, YouTube, etc.


You should check out buddycloud.org. I'm not in any way connected to it, but I saw it mentioned on HN a couple months back. It is based on XMPP for the backend, but the documentation indicates a webclient, as well as mobile clients. Their documentation seems quite extensive and it was part of Google Summer of Code this past summer.

I'd be interested in hearing other people's take unless it involves "XMPP is too complicated."


I don't follow how HTML leads to centralized services. Explanation?


> Why does everything have to be built on top of HTTP?

Port 80 is already open.


All ports should be open between servers. (Edit: Ports should be open if you want. I'm not saying you should run without a firewall, but on a server the firewall is under your control, not your ISP's.)

A few of the DiSo protocols were based on XMPP instead of HTTP.


That's not true either. The best practices for running services on EC2, for instance, don't leave anything but port 80 open by default. That's a good thing, because if the best practices were anything else, people would routinely lose their databases to people guessing "admin/mysql" over the Internet.


> All ports should be open between servers.

No thanks.


How about an "open" port with nothing listening on it? What does "open" mean?

I think his point is that two nodes should be able to open and then connect on any port they choose, not just TCP 80. Please correct me if I'm wrong.

Are we saying that a user should not even be _allowed_ to have an application listen on any port except a known few?

Isn't the more important issue whether an application/process is actually listening on a port? I mean you can send me all the nasty bits you want but unless I'm prepared to receive them, what difference does it make? And I thought the whole premise here is that the only people on this social network are ones you know. They shouldn't be deliberately sending you nasty bits anyway.

I am a unixnoob. Please go easy on me for my ignorance.


I think it is true that if nothing is listening on the port, then the packets will be dropped by the OS. However, you may have services you are unaware of running, or malicious software which has snuck in, or a running application which would be listening on the port if it was open. Perhaps you want to run a webserver on localhost for testing, and by keeping the ports closed you block remote access. For these reasons it is generally regarded as bad practice to leave all ports on a server open to the public.
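To illustrate the first point (Python, stdlib only, my own toy example): with no process listening, a connection attempt to a port is actively refused by the OS, which is the "closed but harmless" case described above:

```python
import socket

# Grab a free port, then close the socket so nothing is listening there.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# With no listener, the OS refuses the connection outright; an "open"
# port only matters once a process actually accepts connections on it.
try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
    refused = False
except ConnectionRefusedError:
    refused = True

print(refused)  # True
```

The firewall argument is about defense in depth: you block remote access even to the services you forgot, or never knew, were listening.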


Maybe your servers, not mine.


Maybe this is another part of the problem. Limiting ourselves to port 80 limits the types of services we can create. But on the other hand, if we really wanted to run a service, then we would just open up the port.

But yes, NAT does complicate things, particularly for home users, but I guess that's a limitation of the universe.


Good luck with that and fanatical ops guys.


This. So much this.

The notion that everything has to be HTTP annoys me too.


Done. But still not ready for the "do everything in a graphical web browser" crowd. And that's what Tent appears to be aiming at: people who think all connections must be made through a web browser.



I find the copy on the website to be rather vague or off-topic.

From the homepage:

  Tent is open, decentralized, and built for the future. Tent changes everything.

  Tent allows every user to run their own server, but like email and the web, most
  users will use a hosting service to handle it.
and the blog:

  The documentation for Tent version 0.1 is now available along with a reference
  server, tentd.
From halfway down the homepage:

  What is Tent?
  Tent is a protocol for open, decentralized social networking.
I understand that the creators are busy building the product, but some marketing can really help create an excited community.

Is this project like Diaspora? How is this different from the competition? I'm excited to hear how this is unique and to see where the project goes.


The "Lifecycle of a post" section on the homepage is a pretty straightforward description of how it works. But I would also be interested in comparisons not only to Diaspora, but also to Statusnet.


There is little to compare. Tent is a protocol. Applications need to be built to use it. Diaspora is an application with an unspecced, intertwined federation protocol.


And Statusnet? And other distributed federated social networking protocols?


statusnet is software, not a protocol. the protocol is ostatus.


So you're saying they need to pitch Tent?


I'm gonna get voted into oblivion but that one left me grinning. Thanks.


How do we handle errors?

(I posted this as a reply to someone else but it deserves its own comment.)

The docs don't include any specs for handling push errors. If I push a new status and one of my followers' servers is down what do I do? Do I retry later? Do I continue to try pushing future updates? At what point do I stop trying to send updates to their missing/broken server?

If my server goes down how do I handle missed updates? Do I need to query everyone I follow? Can I fetch statuses since the last one I received, like Twitter? The docs say "There are a number of parameters available to limit the scope of the request" for GET /posts but don't specify what the parameters are.

Error handling is crucial for a decentralized network like this, especially one that pushes to other servers. Without a method for handling servers that go down and other errors it will absolutely not work.
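Absent a spec, every implementation will improvise something like exponential backoff with a give-up threshold. A sketch (all names and numbers are my own assumptions, nothing here is from the Tent docs):

```python
# Hypothetical retry policy for failed pushes (nothing here is from the
# Tent docs): capped exponential backoff, giving up after max_attempts.
def backoff_schedule(base=2.0, cap=3600.0, max_attempts=8, jitter=None):
    """Delays in seconds before each retry; jitter, if given, is any
    object with a uniform(a, b) method, e.g. random.Random()."""
    delays = []
    for attempt in range(max_attempts):
        delay = min(cap, base * (2 ** attempt))
        if jitter is not None:
            # Spread retries out so a recovering server isn't stampeded.
            delay += jitter.uniform(0, delay / 2)
        delays.append(delay)
    return delays

print(backoff_schedule())  # [2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0, 256.0]
```

The trouble is that if every server picks different values, followers see wildly inconsistent delivery, which is exactly why this belongs in the protocol spec.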


These guys might just want to wrap something like pubsubhubbub[0]. take a look at superfeedr[1].

[0] http://pubsubhubbub.googlecode.com/svn/trunk/pubsubhubbub-co...

[1] http://superfeedr.com/documentation
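As a rough illustration of how lightweight that spec is: if I recall it correctly, a PubSubHubbub subscription is essentially one form-encoded POST to the hub (the URLs below are placeholders, not real endpoints):

```python
from urllib.parse import urlencode

# Core PubSubHubbub subscription parameters; both URLs are placeholders.
params = {
    "hub.mode": "subscribe",
    "hub.topic": "http://example.com/alice/posts.atom",
    "hub.callback": "http://example.com/tentd/push-endpoint",
}
body = urlencode(params)  # POST this to the hub as the request body
print(body)
```

The hub then verifies the callback and takes over fan-out, retries included, which is exactly the machinery Tent would otherwise have to respecify.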


"Developers: It's time to start writing apps for Tent and adding Tent support to existing projects based on the current specification. "

It's time for you to show me why I should invest my time in writing apps for your platform/protocol.


I found it interesting that this announcement said nothing about what Tent actually is. Without reading other pages on the site, all I could tell was that Tent involves "protocol, apps, and servers" in some way.


Yeah, I'd have linked tent.io as well. Clicked there and it's explained after a few buzzwords.


If you hit the #1 spot on HN, you are going to get a bunch of traffic from people you never got traffic from before.

I recommend putting a link to the tent.io homepage that says "new to Tent? Click here!"


What problem is this solving? I know users of social media who are unhappy with what they're using. But I've never heard my mom or any other Facebook user for that matter say: "You know what I want? A decentralized social network where I can own my content!"

For this to get traction, it'll need at least one server with a strong user base.


It's about portability, isn't it? It removes lock-in. You can shop for a host for your social data but retain the data [in part] under your control. You get some leverage. It's the commoditisation of social networking as a service.


"For this to get traction, it'll need at least one server with a strong user base."

Sooooo...Facebook basically.


Not if you can leave and keep your network of friends.


If 99% of your friends keep their data in Facebook, you're still dependent on Facebook.


why one server? the whole point is that you can communicate seamlessly with people on other servers.


Pretty much everyone who's been banned from Facebook or G+ wants this, although they may not realize it.


I don't see how the tent protocol has any hope of scaling to large numbers of users / servers. The notification traffic for anyone with a significant number of "followers" is going to be significant. I think the view that the tent spec authors are taking is a bit too simplistic if this protocol is intended to be Twitter or Facebook-scale.


Computers and the Internet are pretty fast now. I suspect the real difference will not be performance but cost: if you have a million followers you will have to bear the cost of sending out the notifications (as it should be IMO).
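Back of the envelope (all numbers are my own assumptions, chosen only for scale), the fan-out cost is real but not outlandish:

```python
# All numbers are assumptions, chosen only for scale.
followers = 1_000_000
bytes_per_notification = 1_000   # a small JSON body
posts_per_day = 20

daily_bytes = followers * bytes_per_notification * posts_per_day
print(daily_bytes / 1e9)  # GB per day, before retries and TLS overhead
```

Twenty gigabytes a day of outbound traffic is a hosting bill, not an impossibility, and it lands on the account with a million followers rather than on everyone else.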


I agree. The problem is this moves from pull to push. Instead of hosting a status update on a server and allowing others to pull from you, which can utilize lots of caching and other optimizations, you're forced to push data to all of your followers on every single update. This adds significantly more overhead.

Since the protocol is new it doesn't include any specs for handling push errors. If I push a new status and one of my followers' servers is down what do I do? Do I retry later? Do I continue to try pushing future updates? At what point do I stop trying to send updates to their missing/broken server?

If my server goes down how do I handle missed updates? Do I need to query everyone I follow? Can I fetch statuses since the last one I received, like Twitter? The docs say "There are a number of parameters available to limit the scope of the request" for GET /posts but don't specify the parameters.

Error handling is crucial for a decentralized network like this, especially one that pushes to other servers.


IMO the Tent design is going to become obsolete with the arrival of WebRTC. With WebRTC, there is not going to be any reason to have servers running, as long as some of your friends are online. You can do everything by using public key cryptography and store-and-forward. You can authenticate with SRP/zero-knowledge password proofs. You don't need to have your private key with you, you can recover it from a quorum of your friends. Combine this with a public status/"who's online" server - could be any XMPP/web gateway for example - for bootstrapping the initial connection, and you can log in from any library/internet cafe computer. You can use broadcast encryption to address the privacy concerns of store-and-forward designs.

I wrote a series of blog posts about how such a system would work: http://carcaddar.blogspot.ca/search/label/ClearSky


Why would you try to implement this with WebRTC? WebRTC is for real time communication. It's in the name. Social networks don't have to be real time. How would you do persistence in a WebRTC-run social network? E.g. how can I visit your profile when you're offline?


WebRTC is about direct peer-to-peer connections between web browsers. The real-time stuff refers to the video/audio codecs that come as part of the spec.

Persistence is handled by storing-and-forwarding your profile information to your friends. Privacy is ensured by broadcast encryption. If you want people unconnected to your network to find you, you could publish public profile information on a directory server (this could also be the status server that's needed for bootstrapping a connection if you're logging in from a public computer at the library or wherever).
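The broadcast-encryption idea, in its simplest per-recipient key-wrapping form, looks roughly like this. This is a toy sketch built from stdlib primitives to show the structure only; a real system would use an AEAD cipher and proper key wrapping, not HMAC-counter XOR:

```python
import os, hmac, hashlib

def keystream(key, n):
    """Toy keystream (HMAC-SHA256 in counter mode). Illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# One random content key per post; the post body is encrypted exactly once.
content_key = os.urandom(32)
post = b"moved to a new server, update your follows"
ciphertext = xor(post, keystream(content_key, len(post)))

# The content key is then wrapped separately for each friend, so
# "unfriending" someone just means omitting their wrapped key next time.
friend_keys = {"bob": os.urandom(32), "carol": os.urandom(32)}
wrapped = {name: xor(content_key, keystream(k, 32))
           for name, k in friend_keys.items()}

# Bob unwraps the content key with his shared secret and decrypts.
bobs_key = xor(wrapped["bob"], keystream(friend_keys["bob"], 32))
recovered = xor(ciphertext, keystream(bobs_key, len(ciphertext)))
print(recovered)  # b'moved to a new server, update your follows'
```

The ciphertext can then be stored-and-forwarded by friends who can't read it, since only the intended recipients hold a wrapping key.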


I feel so behind when reading comments like this, so behind that this could have been a sarcastic parody and I wouldn't even notice. Is there anyone who understood this comment fully? Is this something any developer should know? I'm fighting the urge to Google every term (or read the blog), but I probably won't understand much of it. Can anyone translate this to plain English?


Understanding what modern cryptography techniques can do is worthwhile for every developer, because it enables entirely different kinds of system architectures. A good example is the Tahoe-LAFS distributed filesystem: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/FAQ

I included links to definitions and further information about all the terms used in that series of blog posts. I'm definitely not an expert on cryptography but I'll be happy to try to answer any questions.


P2P (especially the cypherpunk wet dream that you're talking about) is at least 10x more complex than S2S architectures like Tent, which themselves are more complex than client-server aka Web 2.0. Also, server hosting has known business models while cypherpunks don't.


The proposal I put forward is far simpler than the kind of engineering that goes into making Facebook work. You're confusing novelty with complexity.

There's nothing complex about taking some data, encrypting it, and sending it to another computer. P2P/F2F public-key cryptography networks have been around for about a decade (see Justin Frankel's WASTE http://waste.sourceforge.net/). Putting this together with some advancements into an unhosted browser app is not a huge leap. The problem isn't going to be the technical complexity, it is going to be the UI and slow upload speeds.

The business model for this kind of network is going to be the same as the business model for BitTorrent - advertising on public directory web sites.


Can we separate WebRTC functionality from a complex, bloated, insecure and unreliable web browser? (Assuming we ever get saddled with such a thing.)

Can it be captured in a small, simple application?


If you don't care about the web browser, there is absolutely no need for WebRTC and everything is much simpler. Justin Frankel's WASTE (http://waste.sourceforge.net/) did everything you could want in a P2P social network (messaging, chat, file sharing) in 2003 (it obviously lacked a Facebook-like front end). The only thing it didn't have that my proposal does is broadcast encryption for "unfriending" people.

WebRTC is cool because it will be in web browsers everywhere, and you'll have the "log in from the library computer" ability even for P2P networks, as long as there is a status web server you can bootstrap off of.


I did try out WASTE in 2003.

It seemed more like a chat client than something I could run any application over. It was more an application for doing certain defined things accessible in menus (the ones you mentioned) instead of being a generic means to create peer-to-peer networks.


If you want to do a different application, you can always take the WASTE code and do something else with it. Freenet is about as generic as P2P gets because it only handles storage. WebRTC should actually be a pretty good way to make P2P networks because you get a channel for sending data, doing realtime chat, audio/video codecs, all the HTML5 stuff for the UI, and JavaScript as a scripting language. As far as protocols and data standards go, I think the only reasonable thing is a protocol for status info, public key exchange, authentication, and listing the application protocols a peer supports. Everything else is too application-dependent.
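That minimal peer protocol could be as simple as a JSON "hello" exchanged over the data channel. Every field name below is invented for illustration, not from any spec:

```python
import json

# A hypothetical peer "hello"; every field name here is invented.
hello = {
    "status": "online",
    "public_key": "base64-encoded-key-would-go-here",
    "protocols": ["chat/1", "filestore/2", "feed/1"],
}

wire = json.dumps(hello)      # what would actually cross the data channel
peer_view = json.loads(wire)
print("feed/1" in peer_view["protocols"])  # True
```

Everything past capability discovery would then be negotiated per application protocol, which keeps the base layer tiny.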


As far as data standards go, I like Ethernet. I guess we all have our own preferences on how to do peer-to-peer. That's just the one I like the most. As for protocols and exchanging public keys, how about face-to-face or by postal letter? It's just a blob of text. Again, if the social networks involve only friends and family, why do we need to do everything over the public web? Does everything have to be complicated?

For networks with strangers, it's an entirely different ballgame. I have little interest in that can of worms. Way too complicated. That's for the geniuses who can create stuff like Freenet.

I guess I could look at the WASTE code, but I doubt I would find it simpler than the approach I've settled on. As I say, we all have our preferences. I like things very simple.


If they have any use for a command-line utility to manage the server, it should be named 'pitch'.


Or, you know, 'tent'.


From the docs: "Tent is a protocol for decentralized social networking." I'm not sure why this is interesting? It's neither the first nor the last such protocol, can someone enlighten me?


Perhaps the past discussions on this will help. http://news.ycombinator.com/item?id=4418904


This buzzed right around the time app.net was funded one month ago, http://news.ycombinator.com/item?id=4418904


I'm not aware of any other such protocols, can you name a few?


OStatus, BuddyCloud, Appleseed, Diaspora, etc. http://en.wikipedia.org/wiki/Distributed_social_network

Tent really really needs some justification docs.


OStatus, used by StatusNet, the software behind identi.ca. It even has Twitter compatible API, so why Twitter devs aren't flocking to it is a mystery.

There are also a few projects using XMPP, like BuddyCloud, Jappix, Movim, OneSocialWeb etc.


Because it's not a protocol, it's just a spec. If you want to develop developer interest, give them something to work on. The web was so successful because it provided developers with very specific things that they could work on. Browsers focused on HTML, web servers focused on HTTP.

I find that all of the open social initiatives are either too broad or too specific. OStatus is entirely too broad, from their FAQ:

> First, rather than being a stand-alone spec it builds upon several existing (and evolving) open web standards including: PubSubHubbub, Webfinger, ActivityStreams and Salmon.

So if I'm a developer I can't just learn OStatus, I have to learn 4 other specs as well. Diaspora has the opposite problem, it's too specific. Their recommendation for contributions is to fork their rails app. Developers don't want to fork working apps and fix bugs; they want to write their own from scratch.


OStatus is just about as short as it can be on its own. "The 4 other specs" _together_ aren't much longer than what is necessary _anyway_ for implementing the necessary aspects of a federated social network.

And as all of the protocols necessary already exist and are quite well-established, there are already parsing tools available for anyone to use freely. So development is really easy to get into when it comes to OStatus.

Using something that already exists is in this case much better than reinventing the wheel and rediscovering all implementation issues that have already been taken care of with protocols like XMPP or OStatus.


"So if I'm a developer I can't just learn OStatus, I have to learn 4 other specs as well. Diaspora has the opposite problem, it's too specific. Their recommendation for contributions is to fork their rails app. Developers don't want to fork working apps and fix bugs; they want to write their own from scratch."

Spot on.


Twitter devs aren't flocking to it because its userbase is quite small. It's hard to make a profit off a small userbase.


And why is the user base small? Hint: Circular dependency ;)


If that's what you think, then it's no wonder you consider the situation to be a mystery.


The user base is small because the user base is small.


I hope these guys are able to get some traction. Distributed social networks that can replace twitter and facebook are the only way forward.


I'm not so sure they are the way forward. They're clearly the way forward if you dislike being "locked-in". But is being locked-in always a bad thing? Nope.

We could easily have a distributed web search engine to replace google. It could run on everyones machine, and it'd be awesome. But what would be the advantage?

Also before twitter/facebook, we had email - a distributed social network. How is the "new" distributed social network going to be better than email?


Search engines and social networks are very different things. Build a better Google, and people will gradually switch. That's how Google got started. Build a better Facebook, and nobody cares because their friends aren't signed up.


Doesn't the same thing apply to Facebook when it started? Myspace, Friendster, and all other competitors had a ton more users and strong network effects back in 2004. Facebook offered a better experience though, and so users were willing to switch despite network effects (and even maintain multiple accounts for a period of time)


Not even remotely comparable. The vast majority of current Facebook users had no social networking accounts before they signed up for Facebook. Now, Facebook is entrenched.

That's not to say it was easy to make Facebook succeed, or that it would be impossible to make a service outcompete Facebook now. But it was certainly far easier to unseat the king back then than it is now.


> We could easily have a distributed web search engine to replace google.

"Easily?" Distributing things is hard. The claims of those who don't know this are to be treated with skepticism.

> It could run on everyones machine, and it'd be awesome.

Until we get something like "Trusted Computing," there will be a few people who will run servers that "cheat" somehow. This has been true for distributed systems over the Internet for over 3 decades now. (SMTP, Gnutella, Bittorrent, just to name a few.)


I'm sure it's hard, but it exists already. http://www.yacy.net


> Also before twitter/facebook, we had email - a distributed social network. How is the "new" distributed social network going to be better than email?

Imagine a Word document created on Windows couldn't be opened on Linux or a Mac. Imagine you had to use Windows just because other people do. Would that be good?

The choice of operating system is up to you, and you must be able to exchange data across operating systems. That interoperability is needed in any good system. The only reason not to want interoperability is if you want to monopolize the market.


>> We could easily have a distributed web search engine to replace google.

I'd like to hear more about that. An easy to build distributed system able to take the same computing/network load of google? Man, that would be awesome.


yacy.net


Hm, the page is not very clear about the scalability of this system. It only says the system performs 130k queries per day (~ 1.5 requests per second).

This page (http://searchengineland.com/by-the-numbers-twitter-vs-facebo...) claims that google search handles 34k requests per second.

I've yet to see someone describe how you can easily create something completely decentralized that achieves such high performance.

And I'm talking only about speed. If you throw other issues like quality of results, privacy, dependability, how to get traction, and so on, the thing becomes insanely difficult.

If it was so easy to do this kind of stuff, someone would already have done it.


Attach your photos with your girlfriend and send them to all your friends by email, and then everybody will post a comment, I mean, an email.


Sounds like someone is trying to reinvent the unhosted[1] protocols. Unhosted is used in relatively popular apps like owncloud[2].

[1] http://www.unhosted.org [2] http://www.owncloud.org


Tent is nothing like unhosted. Unhosted is about running apps entirely in your browser.


"Protocol" "Reference Implementation" "API"

These are the highly encouraging words I keep seeing from Tent that I never saw from Diaspora.


I suggest that, whenever you make a post like this, you indicate WTF the software does, like this:

We're pleased to announce version 0.1 of Tent, <insert description of WTF Tent is>.

The description should preferably indicate not just what it does at a low level, but why you'd want to use it. In this context, the description on the documentation page, "Tent is a social layer over HTTP using JSON.", doesn't motivate Tent's existence particularly well. Something like this would be more instructive:

Tent <why Tent exists> by adding a social layer to HTTP.


You lost me at "changes everything".


Are there any protections against spam? (unsolicited notifications)

Are there any authorization & authentication measures? (so you know a notification is really from whom it claims to be)

I see mention of HMAC in the overview; (how) does it solve those problems? I know the basics of TLS and RSA; how is it related? I'd be grateful for hints. Also, I remember some alternative mentioned in old threads about Diaspora, which was reportedly devised by some security guys (?) and tried to have a good security model. Do you know the name of that project? Some GNU one?
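For what HMAC buys you here (a generic sketch, not Tent's actual scheme): once two servers hold a shared secret from the follow handshake, the sender can sign each notification so the receiver can check both origin and integrity. It does nothing against spam from parties you've never established a secret with:

```python
import hmac, hashlib

# A shared secret established out of band, e.g. during a follow handshake.
secret = b"shared-secret-from-handshake"

def sign(body: bytes) -> str:
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign(body), signature)

notification = b'{"post": "hello from my tent server"}'
sig = sign(notification)

print(verify(notification, sig))           # True
print(verify(b'{"post": "forged"}', sig))  # False
```

Unlike RSA signatures, HMAC is symmetric: either side of the pairing could have produced the signature, so it authenticates the relationship rather than providing third-party-verifiable proof of authorship.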


Decentralizing would be an easier sell on a mass scale as a hardware solution. I can't imagine asking my parents to implement open source routing software but they bought a wireless router on their own.

Maybe Tent should think about a strategic partnership with a raspberry pi or another one of the many great cheap computing solution available today.

Write some basic stuff into it, and sell it as a plug-it-into-your-router-and-go social networking device.


That's the FreedomBox or Tonido.

I think a $100/year cloud server is likely to be faster, more reliable, and easier to set up than a home server.


I agree, it would probably be easier to pay for a service if people really wanted it.


This protocol seems to lack secure routing of messages inside its network, instead offering only point-to-point interaction. However, that will be required in order to build a truly decentralised social network. Something along those lines was developed in the Tonika project: http://pdos.csail.mit.edu/~petar/5ttt.org/


Tent should take a page from Jabber and create gateways that post for you on other social networks, much like Jabber had gateways for non-Jabber IM services.

That way you could have a backup of all your social data, stored on your server if you wanted but broadcast to the other servers.


That website needs a major redesign. I can tell you that most people spend three seconds on a web page before deciding whether to move on, and there is no way a generic Bootstrap website is going to grab anyone's attention in those three seconds.


This sounds a lot like XMPP; why not just extend that instead?


buddycloud is the one sane federated social network that is piggybacking on a protocol that has proven itself for over a decade. And I'm really expecting big things from them now that they've got incubator funding through Mozilla WebFWD.

I think the fundamental reason HN/Proggit and co. reject XMPP is because it's XML. Which is a ridiculous reason. Real-time streams over encrypted TCP seems to be the right way to go over forcing a protocol not meant for constant communication (HTTP) to do so.


This seems like a pretty clear example of HN comments being full of flat criticism and naysaying with very little in the way of helpful feedback.


Saw the link header bit and was instantly reminded of Riak - is that what the servers are running? Makes sense, I suppose..


Is their documentation page made with Sphinx? If so, does anybody know that theme? It looks really nice.


It is http://twitter.github.com/bootstrap/

You can see it in the source code.


Has anyone put up a public server yet?


you should check out http://secushare.org/


This has the potential to replace Facebook. And it might do just that with a couple of endorsements.


I don't know how you got the idea that Tent is a competitor to Facebook. Tent's goal is to be used by Facebook, Twitter, or any other social site, so that they basically have an API to access the users' data and the users can have full control and ownership of that data.


Facebook will NEVER use Tent, you have got to be kidding...


? I didn't say it WILL be. Its goal is to be used BY social networks, not to replace them. Maybe FB won't use Tent, but perhaps whatever comes next will.


No, it doesn't have that potential.



