> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API. We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.
However, I think a fundamental issue arises if you are going to pay people to see ads: What if someone forks Brave, and creates a browser which blocks all Brave ads, while pretending to click on them?
Neither of the two solutions I can think of is pleasant: you either need to somehow verify that ads are viewed by a human (i.e. CAPTCHAs), or use DRM-like mechanisms to hide a token in Brave’s binary, so that only “honest” browsers can get paid.
Any network with grants or revshares of tokens or other units of account that might exchange to money, and humans in the loop, will have fraud. Blockchain cannot stop it and has really nothing to offer yet on this front -- reputation on chain is a hope, some say a vain dream.
What Brave offers that's far better than today's joke of an antifraud system for ads is as follows: 1/ integrity-checked open source native code, which cannot be fooled by other JS on page; 2/ looking at all the sensors, even the ones without web APIs, to check humanity.
(1) requires SGX or ARM equivalent, widespread on mobile. JS by contrast cannot be sure of anything unless the antifraud script knows it runs first, and publishers cannot guarantee this in general or easily.
(2) is a material advantage over JS, which has only some but not all sensor APIs.
There are plenty of ways; JavaScript is an amazing thing. For instance, anyone running this site can see that I typed this in character-by-character, and even had to hit backspace a few times. Dumping all of the text in at once would be suspicious. Not a clear indicator of a bot, but one indicator at least. I also moved my mouse and just expanded the text box so I could see my thoughts. Not something a bot would normally do. I'm also coming from an IP address that only ever posts to one account, not to multiple. My browser fingerprint only posts to one account. My browser fingerprint shows me on a MacBook using Chrome, and my cookies indicate I have a web browsing history. I upvote. I downvote. I post something, then engage in a follow-up discussion later on. These follow-ups are upvoted by other accounts that match all or most of the above criteria.
My day job is information security, specifically working with a SIEM to correlate many diverse logs from many diverse systems and figure out what really happened using many pieces of individually benign data. None of these things are themselves indicators of bots, but the more you start to trip these rules, the more bot-like your behavior becomes. Eventually it paints a picture that shows no human could reasonably be behind an account that routinely posts two or more tweets at the same time, never engages in follow-ups, is only followed/liked by other suspicious accounts, and has a user agent of Python 3.7 coming from a source IP on aws.amazon.ru. You show them a captcha and if they fail or bail, you've got 'em.
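The correlation described above can be sketched as a simple weighted rule set: no single signal condemns an account, but the cumulative score does. This is a minimal illustration only; the signal names, weights, and threshold are all hypothetical, not anyone's real detection rules.

```python
# Sketch of correlating individually benign signals into a
# bot-likelihood score. All rule names and weights are made up.

RULES = {
    "pasted_text_all_at_once": 1,
    "no_mouse_movement": 1,
    "ip_posts_to_many_accounts": 2,
    "fingerprint_posts_to_many_accounts": 2,
    "never_engages_in_follow_ups": 1,
    "simultaneous_posts": 2,
    "scripted_user_agent": 3,
}

CAPTCHA_THRESHOLD = 5  # arbitrary cutoff for this sketch

def bot_score(signals):
    """Sum the weights of every rule this account trips."""
    return sum(RULES[s] for s in signals if s in RULES)

def should_challenge(signals):
    """Show a CAPTCHA once enough individually weak rules fire."""
    return bot_score(signals) >= CAPTCHA_THRESHOLD
```

A human who merely pastes text trips one low-weight rule and sails through; an account with a scripted user agent that posts simultaneously and never follows up crosses the threshold and gets the CAPTCHA.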
> May I suggest that ``problematic'' speech is often simply what people don't like to hear?
To be fair, that's just a restatement of the original definition. If someone doesn't want to hear a certain kind of speech, then they find it problematic.
> Now, platforms definitely should choose sensible defaults, but one should be able to easily opt out of them.
Also to be fair, Facebook apparently caught the error and corrected it, and it does have plenty of ways to opt out as a user.
The biggest problem here seems to be the assumption that a sensible policy can be automated.
What about a scenario that describes an event occurring during a discrete amount of time during a set length of time? For example: A teacher says that he will give the class a surprise pop quiz at the beginning of class some day next week. How does reverse induction fail here?
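For concreteness, the backward-induction argument in that scenario can be run mechanically: eliminate the last still-possible day (the quiz would be expected there, hence no surprise), and repeat until nothing is left. A tiny sketch, with the week's days as the only assumption:

```python
# The backward-induction argument in the surprise-quiz paradox:
# on the last still-possible day the students would expect the
# quiz, so it couldn't be a surprise -- strike that day and repeat.

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
possible = list(days)

while possible:
    possible.pop()  # eliminate the latest remaining day

# 'possible' is now empty: the argument "proves" no surprise quiz
# can happen at all -- yet a quiz on, say, Wednesday still
# surprises everyone, which is exactly the paradox.
```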
> It added it had received the microphone data only as code rather than audio, and that it could match that code with audio data from a match.
That sounds funnily absurd to me. By that line of argument, even sound is not really audio. After all, it's being encoded as air pressure waves :-)
EDIT: Pardon my ignorance. Based on Google Translate's translation of their statement [1], it seems that they are using some kind of perceptual hashing which is quite interesting.
I guess they mean that they just have a summary of the sound, not enough to listen to it or anything. It would be a bit like just having the hash of a piece of data, not the data itself.
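The hash analogy can be made concrete. Here is a minimal sketch using an ordinary cryptographic digest as a stand-in for their perceptual hash (a real perceptual hash is robust to noise and re-encoding, which SHA-256 is not; this only illustrates the "summary, not the sound" idea). All the byte strings and names here are invented for illustration.

```python
import hashlib

def fingerprint(audio_bytes):
    """Stand-in for a perceptual hash: a short, irreversible
    digest of the audio. You can compare it, but you cannot
    reconstruct the recording from it."""
    return hashlib.sha256(audio_bytes).hexdigest()

# The server stores only fingerprints of known broadcast audio.
known_matches = {fingerprint(b"goal-celebration-broadcast-snippet")}

def heard_a_match(mic_bytes):
    """The client sends a code, not audio; the server can only
    check membership in the known set."""
    return fingerprint(mic_bytes) in known_matches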
HSTS is trying to protect against a specific kind of Man-in-the-Middle (MITM) attack: when the man in the middle pretends that the website you are trying to access does not support HTTPS.
I believe trying HTTPS first wouldn't help: the MITM would refuse your connection, and your browser would then fall back to HTTP.
With HSTS, the server tells your browser that it is going to support HTTPS for a while. Now, if your first connection to the server is secure (no MITM), your browser will know from then on that this particular domain supports HTTPS. So it will know something fishy is going on if a MITM tries to pretend otherwise.
Trying HTTPS first would still help a lot in other cases, such as the one in the article. None of the super cookie HSTS techniques would have worked in the first place if the browser had just always tried to use HTTPS first.
Probably other unknown vulnerabilities could be averted by just trying HTTPS first too. Not doing so should be considered bad practice, with or without HSTS.
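The mechanism above amounts to a small pin store in the browser. A toy sketch of it, assuming only the standard `Strict-Transport-Security: max-age=...` response header (the parsing here is deliberately simplified and ignores directives like `includeSubDomains`):

```python
import time

# Toy in-memory HSTS store: after one *secure* response carrying a
# Strict-Transport-Security header, the "browser" upgrades every
# later request for that host to https, until the pin expires.

hsts_store = {}  # host -> expiry timestamp

def record_response(host, headers):
    """Call on every response received over a secure connection."""
    sts = headers.get("Strict-Transport-Security", "")
    if sts.startswith("max-age="):
        max_age = int(sts.split("=", 1)[1].split(";")[0])
        hsts_store[host] = time.time() + max_age

def scheme_for(host):
    """https if we remember an unexpired HSTS pin, else plain http."""
    if hsts_store.get(host, 0) > time.time():
        return "https"
    return "http"
```

This also shows why the first connection matters: if a MITM intercepts it before any pin exists, `scheme_for` still says `http`, which is exactly the trust-on-first-use gap HSTS preload lists (and "try HTTPS first") aim to close.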
Around that time my cousin, who was three years older, was in high school. He was having considerable difficulty with his algebra, so a tutor would come. I was allowed to sit in a corner while the tutor would try to teach my cousin algebra. I'd hear him talking about x. I said to my cousin, “What are you trying to do?” He says, “I'm trying to find out what x is, like in 2x + 7 = 15,” I say, “you mean 4.” He says, “Yeah, but you did it with arithmetic. You have to do it by algebra.”
I learned algebra, fortunately, not by going to school, but by finding my aunt's old schoolbook in the attic, and understanding that the whole idea was to find out what x is – it didn’t make any difference how you do it.
For me, there was no such thing as doing it “by arithmetic,” or doing it “by algebra.” “Doing it by algebra” was a set of rules which, if you followed them blindly, could produce the answer: “subtract 7 from both sides; if you have a multiplier, divide both sides by the multiplier,” and so on – a series of steps by which you could get the answer if you didn't understand what you were trying to do. The rules had been invented so that the children who have to study algebra can all pass it. And that’s why my cousin was never able to do algebra.
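The tutor's two blind rules are literal enough to run. A minimal sketch for equations of the form a·x + b = c, using exact fractions so "divide by the multiplier" never loses precision:

```python
from fractions import Fraction

def solve_by_rules(a, b, c):
    """Solve a*x + b = c by blindly applying the tutor's rules:
    subtract b from both sides, then divide both sides by the
    multiplier a. No understanding required."""
    c = c - b              # "subtract 7 from both sides"
    return Fraction(c, a)  # "divide both sides by the multiplier"

# The cousin's equation, 2x + 7 = 15:
# solve_by_rules(2, 7, 15) gives x = 4 -- the same 4 the young
# Feynman saw "by arithmetic."
```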