Hacker News | mkeyhani's comments

Because Facebook, unlike say a subreddit, is not a community formed around a certain topic. There is no 'we' in particular.


Isn’t the “we” whoever developed and hosts the service?

The idea that you can only limit a community if it’s dedicated to a specific topic is nonsensical.


I am not sure if I understand your argument.

If whoever develops and hosts the service gets to limit what is allowed, then why shouldn’t Google censor its search results?


They don’t quite know:

> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API. We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.

https://www.blog.google/technology/safety-security/project-s...


I really like the idea behind Brave.

However, I think a fundamental issue arises if you are going to pay people to see ads: What if someone forks Brave, and creates a browser which blocks all Brave ads, while pretending to click on them?

Neither of the two solutions I can think of is pleasant: you either need to somehow verify that the ads are viewed by a human (e.g. with CAPTCHAs), or use DRM-like mechanisms to hide a token in Brave’s binary, so that only “honest” browsers can get paid.


Any network with grants or revshares of tokens or other units of account that might exchange to money, and humans in the loop, will have fraud. Blockchain cannot stop it and really has nothing to offer yet on this front -- on-chain reputation is a hope, some say a vain dream.

What Brave offers that's far better than today's joke of an antifraud system for ads is as follows: 1/ integrity-checked open source native code, which cannot be fooled by other JS on page; 2/ looking at all the sensors, even the ones without web APIs, to check humanity.

(1) requires SGX or an ARM equivalent, which is widespread on mobile. JS, by contrast, cannot be sure of anything unless the antifraud script knows it runs first, and publishers cannot guarantee this in general or easily.

(2) is a material advantage over JS, which has only some but not all sensor APIs.

For more on the joke of antifraud adtech today, please see https://www.slideshare.net/augustinefou/state-of-digital-ad-... and https://twitter.com/acfou's other work.


Eliminating all bots is not desirable. Many of them are useful and interesting.

Eliminating the bad ones with few false positives is hard.


Bots shouldn’t be allowed to follow people.


I'm surprised to see comments like this on HN.

Technically speaking, I'm not sure it's even possible to tell for sure which accounts are bots and which aren't. How can they tell?


There are plenty of ways, JavaScript is an amazing thing. For instance, anyone running this site can see that I typed this in character-by-character, and even had to hit backspace a few times. Dumping all of the text in at once would be suspicious. Not a clear indicator of a bot, but one indicator at least. I also moved my mouse and just expanded the text box so I could see my thoughts. Not something a bot would normally do. I'm also coming from an IP address that only ever posts to one account, not to multiple. My browser fingerprint only posts to one account. My browser fingerprint shows me on a MacBook using Chrome, and my cookies indicate I have a web browsing history. I upvote. I downvote. I post something, then engage in a follow-up discussion later on. These follow ups are upvoted by other accounts that match all or most of the above criteria.

My day job is information security, specifically working with a SIEM to correlate many diverse logs from many diverse systems and figure out what really happened using many pieces of individually benign data. None of these things are themselves indicators of bots, but the more you start to trip these rules, the more bot-like your behavior becomes. Eventually it paints a picture that shows no human could reasonably be behind an account that routinely posts two or more tweets at the same time, never engages in follow-ups, is only followed/liked by other suspicious accounts, and has a user agent of Python 3.7 coming from a source IP on aws.amazon.ru. You show them a captcha and if they fail or bail, you've got 'em.
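The correlation logic described above can be sketched as a weighted score over individually weak indicators. Everything here (the indicator names, weights, and threshold) is invented for illustration; a real SIEM rule set would be far richer:

```python
# Toy sketch of correlating weak bot indicators into a single score.
# Indicator names, weights, and the threshold are made up for
# illustration only.

INDICATORS = {
    "pasted_full_text": 1.0,          # text appeared all at once, no keystrokes
    "no_mouse_movement": 1.0,
    "ip_shared_by_many_accounts": 2.0,
    "no_follow_up_engagement": 1.0,
    "scripted_user_agent": 3.0,       # e.g. a "Python/3.7" user agent
}

def bot_score(observed):
    """Sum the weights of the indicators an account has tripped."""
    return sum(w for name, w in INDICATORS.items() if name in observed)

def looks_like_bot(observed, threshold=4.0):
    # No single indicator is damning; the accumulated combination is.
    return bot_score(observed) >= threshold
```

The design point is that no single rule fires the alarm; only the accumulated score crosses the threshold (after which you might show the CAPTCHA), which is what keeps false positives down.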


You could just fudge the numbers a little bit and have the system say "~X human, ~Y bot followers".


BOTS MATTER


May I suggest that problematic speech is often simply what people don't like to hear?

Of course, I think it's their right to choose what they want to hear. But that does not mean I defend censorship.

How to filter the content they consume should be every individual's choice.

Now, platforms definitely should choose sensible defaults, but one should be able to easily opt out of them.


> May I suggest that “problematic” speech is often simply what people don't like to hear?

To be fair, that's just a restatement of the original definition. If someone doesn't want to hear a certain kind of speech, then they find it problematic.

> Now, platforms definitely should choose sensible defaults, but one should be able to easily opt out of them.

Also to be fair, Facebook apparently caught the error and corrected it, and it does have plenty of ways to opt out as a user.

The biggest problem here seems to be the assumption that a sensible policy can be automated.


The backwards induction argument assumes the time of execution is a discrete variable.

However, at least as far as we know, time seems to be continuous.


What about a scenario where the event occurs at a discrete moment within a set length of time? For example: a teacher says that he will give the class a surprise pop quiz at the beginning of class some day next week. How does backwards induction fail here?


> It added it had received the microphone data only as code rather than audio, and that it could match that code with audio data from a match.

That sounds funnily absurd to me. By that line of argument, even sound is not really audio. After all, it's being encoded as air pressure waves :-)

EDIT: Pardon my ignorance. Based on Google Translate's translation of their statement [1], it seems that they are using some kind of perceptual hashing which is quite interesting.

[1]: http://www.laliga.es/noticias/nota-informativa-138


I guess they mean that they just have a summary of the sound, not enough to listen to it or anything. It would be a bit like just having the hash of a piece of data, not the data itself.
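For the cryptographic-hash version of that analogy, a minimal sketch (note that real audio matching uses perceptual fingerprints that survive noise and re-encoding, which a cryptographic hash does not):

```python
import hashlib

# Stand-in for raw audio samples; a real clip would be megabytes.
audio_bytes = b"\x00\x01" * 100_000

digest = hashlib.sha256(audio_bytes).hexdigest()

# The digest is only 32 bytes (64 hex characters): enough to check
# whether two clips are bit-identical, but hopeless for reconstructing
# the sound itself, since hashing is one-way and discards the data.
print(len(audio_bytes), len(digest))
```

The same asymmetry is what LaLiga seems to be claiming: the app holds a compact code derived from the microphone input, not audio anyone could listen to.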


Thanks. You are right. I stand corrected.


HSTS is trying to protect against a specific kind of Man-in-the-Middle (MITM) attack: when the man in the middle pretends that the website you are trying to access does not support HTTPS.

I believe trying HTTPS first wouldn’t help: the MITM would refuse your connection, and your browser would fall back to HTTP after that.

With HSTS, the server tells your browser that it is going to support HTTPS for a while. Now, if your first connection to the server is secure (no MITM), from then on your browser will know that this particular domain supports HTTPS. So it will know something fishy is going on if a MITM tries to pretend otherwise.
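Concretely, the server opts in with a response header like `Strict-Transport-Security: max-age=31536000; includeSubDomains`. A toy parser (nowhere near RFC 6797-complete) showing what the browser remembers from it:

```python
def parse_hsts(header):
    """Extract the two common HSTS directives from a header value."""
    policy = {"max_age": 0, "include_subdomains": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # How long (in seconds) the browser should insist on HTTPS.
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy
```

For example, `parse_hsts("max-age=31536000; includeSubDomains")` yields a one-year policy covering subdomains; until it expires, the browser upgrades or refuses plain-HTTP connections to that domain on its own.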


Trying HTTPS first would still help a lot in other cases, such as the one in the article. None of the HSTS supercookie techniques would have worked in the first place if the browser had just always tried to use HTTPS first.

Probably other unknown vulnerabilities could be averted by just trying HTTPS first too. Not doing so should be considered bad practice, with or without HSTS.


In particular, if I type news.ycombinator.com into my address bar, there is no reason to expand it with http:// instead of https://.


Here's Feynman's take on learning algebra:

Around that time my cousin, who was three years older, was in high school. He was having considerable difficulty with his algebra, so a tutor would come. I was allowed to sit in a corner while the tutor would try to teach my cousin algebra. I'd hear him talking about x. I said to my cousin, “What are you trying to do?” He says, “I'm trying to find out what x is, like in 2x + 7 = 15.” I say, “You mean 4.” He says, “Yeah, but you did it with arithmetic. You have to do it by algebra.”

I learned algebra, fortunately, not by going to school, but by finding my aunt's old schoolbook in the attic, and understanding the whole idea was to find out what x is – it didn’t make any difference how you do it.

For me, there was no such thing as doing it “by arithmetic,” or doing it “by algebra.” “Doing it by algebra” was a set of rules which, if you followed them blindly, could produce the answer: “subtract 7 from both sides; if you have a multiplier, divide both sides by the multiplier,” and so on – a series of steps by which you could get the answer if you didn't understand what you were trying to do. The rules had been invented so that the children who have to study algebra can all pass it. And that’s why my cousin was never able to do algebra.

(from What Do You Care What Other People Think?)

(Video: https://www.youtube.com/watch?v=VW6LYuli7VU)

