Hacker News | holtalanm's comments

I have never seen anything advocating for moving away from RSA. I'm curious what their sources are for this claim.



Watched the whole thing; it was very informative. I actually haven't used RSA in any capacity in years (AES is a lot easier to use), but I always viewed RSA as a battle-tested encryption algorithm. I suppose with anything there are ways to misuse it, and RSA appears to be really easy to misuse.


I would say that it’s fairly rare that RSA and AES are really competitors for the same category of uses, as they’re asymmetric and symmetric algorithms respectively.

I would suggest that anyone who doesn’t know about the differences between these things just not use cryptography directly and instead use things like libsodium which has sane defaults and a hard to misuse api.

If you don’t know cryptography fairly well, it’s a massive minefield.


In 2005, the NSA began recommending migrating away from RSA and onto ECC algorithms for its customers' classified use cases. [1]

In 2018, it paused that recommendation, essentially arguing that if you haven't migrated already, you should wait until the quantum-resistant algorithms are well vetted, to avoid a second migration.

1. https://en.wikipedia.org/wiki/NSA_Suite_B_Cryptography


> in reality you still need server side state for useful features like logging out

I'm curious about this. Normally 'logging out' just involves deleting the secure HTTP-only cookie where the JWT was stored. Is there something I'm missing here?


Access revocation: sometimes it's critical to block access to an issued token, without trusting the client to comply with revocation, especially for malicious cases.

Enforcing this means implementing an access check on each (critical) request, which gives a self-contained token little advantage over a plain stateful signed session token.


With ordinary sessions you need to store every active session on the server. Sessions might be long-lived, possibly eternal.

With JWT you only need to store forcibly terminated sessions on the server, and those entries are short-lived. So it's basically an empty map.

Another option with tokens is to rotate the server key and force all short-lived sessions to reauthenticate. It's not very elegant, but if forcible logout is an extremely rare scenario, it might be a reasonable way to avoid a check on every request while still supporting it.


Yeah, if you need that kind of control over token access, then I'm not certain a JWT is the right tool for the job. For most use-cases a short-lived JWT is fine, as it expires in a matter of minutes, or even seconds, depending on configuration.


The ability to logout existing sessions (typically either via a manual user action or automatically upon changing/resetting their password) is a desirable feature in essentially all applications where user accounts can be compromised.

You can kind of fake this by using a short-lived JWT and constantly refreshing it, but this:

1. Massively increases server strain and bandwidth usage

2. Has problems for users with less reliable connections (they'll be randomly logged out all the time)

3. Makes "Remember Me" style features impossible (unless you use a server-side store for that, which brings us back to it not being stateless)

Here's a good graph on why $method to make JWTs work for sessions is bad: http://cryto.net/~joepie91/blog/2016/06/19/stop-using-jwt-fo... (note: for some reason the website doesn't support HTTPS :( )


> 1. Massively increases server strain and bandwidth usage

A short-lived JWT that fits into an HTTP Header is not going to _massively_ increase your bandwidth usage. At most, you will end up with a single refresh request every few minutes as each short-lived JWT expires.

> 2. Has problems with users less reliable connections (they'll be randomly logged out all the time)

Usually if your request failed due to a bad connection, the client wouldn't be designed to automatically log out the user. That would be just terrible UX.

> 3. Makes "Remember Me" style features impossible (unless you use a server-side store for that, which brings us back to it not being stateless)

Incorrect. A short-lived JWT tied to a refresh token allows for a remember-me style feature by checking account access when issuing a new JWT token.


AD: I wrote a library that can deal with this for JWT https://github.com/endiangroup/compandauth

The skinny is that you place a copy of a monotonic counter inside every JWT you issue, keep track of the counter server side, and compare each request's JWT copy against it plus some delta (the delta being the maximum number of concurrent sessions you want the user to have).
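My rough reading of that scheme, sketched in plain JS (this is not compandauth's actual API; the names and the delta check are illustrative):

```javascript
// Server-side state: one small integer per user, not one entry per session.
const serverCounters = new Map(); // userId -> latest issued counter
const MAX_SESSIONS = 3;           // the "delta": max concurrent sessions

function issueSession(userId) {
  const current = (serverCounters.get(userId) || 0) + 1;
  serverCounters.set(userId, current);
  return { userId, counter: current }; // this pair goes in the JWT claims
}

function isSessionValid(claims) {
  const current = serverCounters.get(claims.userId) || 0;
  // Valid if the token's counter is within `delta` of the latest issued one,
  // so issuing a 4th session silently evicts the oldest of the 3.
  return claims.counter > current - MAX_SESSIONS;
}

function revokeAll(userId) {
  // Jump the counter past every outstanding token.
  const current = serverCounters.get(userId) || 0;
  serverCounters.set(userId, current + MAX_SESSIONS);
}
```

The nice property is that revocation state is O(1) per user rather than O(1) per session.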


Set the counter to 0 in the DB and put it into the JWT. Increment the counter in the DB to revoke the user. Reject if the counter in the JWT is less than the counter in the DB. What's the problem?


Then you're hitting the DB on every request just to do auth.

If you _had_ to do that, I would put the counter into something like Redis instead.


Don't you need to hit the DB anyway to fetch authorization data like the user's role? Clearly you aren't going to store it in the JWT, or you face the invalidation issue. But fine, cache it in Redis. Problem solved.

10 timeout to reply. o_O


This conversation comes up all the time when discussing JWTs, and unfortunately I think the issue is usually way overblown:

1. I don't believe there are any real security issues regarding logout if JWTs have a sufficiently short expiration time.

2. The reason this issue comes up is compliance audits, where auditors demand that the supplied token become invalid as soon as the user logs out. However, if the JWT is properly discarded from the client, the fact that it is still valid for another ~5-10 minutes is only a security risk if the token has already been stolen. The fact of the matter is that you aren't really protecting against a new attack vector with this "must immediately revoke tokens on logout" rule.

3. Despite my beliefs with #2 (and I'd love to hear an argument why this isn't valid), good luck trying to convince an auditor about that fact, who often love finding minor/mundane issues to justify their existence. So you'll still need to maintain a small blocklist, but the data in that list is usually very small (most users never log out these days) and can often be replicated in memory to each server.


I agree with your assessments, and your reply makes sense in the context of the above comments. For any readers arriving here who didn't read the article, I just want to note that these are not the fundamental complaints of the article itself, and they are also unrelated to the proposed solution.


This is solved with a revocation list, which only needs to contain the tokens issued within the last ~5-10m for which there is a reason for revocation. Add to that a revocation list for access tokens, which are typically 24h.

The sum of both lists is vastly smaller and easier to manage than distributing session state and maintaining it server side for every single user.
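A minimal sketch of such a short-lived revocation list (illustrative names, with an in-memory Map standing in for whatever shared store you'd actually use):

```javascript
// Revocation list keyed by the token's unique ID (the JWT `jti` claim).
// Entries only need to live as long as the token itself could still be valid.
const revoked = new Map(); // jti -> token expiry timestamp (ms)

function revoke(jti, tokenExpiryMs) {
  revoked.set(jti, tokenExpiryMs);
}

function isRevoked(jti, now = Date.now()) {
  const expiry = revoked.get(jti);
  if (expiry === undefined) return false;
  if (expiry <= now) {
    revoked.delete(jti); // token expired on its own; no need to keep tracking it
    return false;
  }
  return true;
}
```

Because entries self-expire with the tokens, the list stays tiny regardless of how many users you have.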


I'm sure there are employees out there that have their self destruct scripts ready to go. If they are ever terminated they have 10 minutes of token validation time to blow everything up.


That would be insanely illegal.


I believe they are mentioning the fact that the server cannot unilaterally log the user out in a "naive" JWT-based implementation without storing and checking a token blocklist - which makes the session no longer stateless.


I can see that. I suppose when people say they need 'server-side session storage' I start thinking of app state, but in reality it could be as simple as storing a JWT refresh token that would be considered valid.


It matters in the context of folks that are trying to do a serverless architecture, sold on the idea that JWTs don’t require anything more than a function to issue auth


My question here is:

Are we really concerned about console tools overreaching with their telemetry? Personally, I am not.

I would love to know why others think this is some kind of huge issue, without a bunch of 'what-if' scenarios.


I am. The terminal is one of the few places I am not being tracked. It’s also a portal into my most private data and activity, so if there’s one place I don’t want to be tracked, it’s here.

I do not long for a future where the terminal ecosystem resembles the state of the greater internet with regards to privacy and tracking. We’ve collectively watched it happen to almost every other segment of technology in the past 20-odd years, so it’s not far fetched to believe it could happen here as well.


Are there documented cases of these CLI tools abusing their telemetry? Or is it entirely used to pinpoint performance issues and bugs within the tools that implement it?

If it is the former, I can see there being cause for concern. If it is the latter, this is just pure fear-mongering.


I liked the Gatsby comment/suggestion a lot better: a tool for automatically setting the do-not-track env flags for all different dev tools.


Because what I really want is a ton of different random bullshit environment variables next time I go to debug something :/


Well, good luck getting buy-in from the CLI tool devs, then. The other option requires absolutely zero buy-in from Homebrew, Gatsby, dotnet, or any other CLI.


> In my experience it has never, ever been the JS rendering layer which has caused unresponsiveness in an application.

I implemented an undo/redo stack on top of Vuex once that operated on some very large data structures.

I got unresponsiveness after only ~3 changes to the data, purely due to how Vuex checks state for changes. No network, no database; purely in the frontend client.

I ended up needing to freeze the state as I pushed it into the Vuex store, so Vuex wouldn't check previously pushed state for changes.

My point is, there are multiple places where, if you are building an app at scale, you can run into client performance issues.


Would you mind elaborating a bit on _how_ you implemented this? In my experience with Vuex and large datasets with dozens of stores, the devil is in the details, and how you change stuff matters a lot.

For example, you can't just create and keep a copy of a dataset / variable. It will remain reactive. You need to clone it. Failing to do so will indeed quickly clog up... everything :D


You don't have to clone it; cloning the object and putting it in Vuex will still result in it being reactive.

`Object.freeze` is what I used. It causes Vuex to not traverse the object for changes. In my case, the objects I was pushing into the Vuex state were essentially immutable once pushed, so this did the trick.

Well, that, and only pushing partials of the entire state, so the object model didn't get too unwieldy. To get the total state, I just replayed the changes on top of the base state; the base state was reset once the number of changes reached a certain size.
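Roughly what that looks like, minus Vuex itself (Vue 2's observer skips objects that are already frozen, checking `Object.isFrozen`, which is why this works; the snapshot helper here is illustrative):

```javascript
// Freeze each snapshot before it goes into the store, so the reactivity
// system never walks it looking for changes.
function pushSnapshot(undoStack, state) {
  const snapshot = Object.freeze({ ...state }); // shallow freeze of a copy
  undoStack.push(snapshot);
  return snapshot;
}

const undoStack = [];
const snap = pushSnapshot(undoStack, { count: 1 });

console.log(Object.isFrozen(snap)); // true
try { snap.count = 99; } catch (e) { /* throws in strict mode */ }
console.log(snap.count);            // still 1
```

Note the freeze is shallow; for deeply nested state you'd freeze recursively, or keep the nested pieces immutable by convention.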


> Calling these things weird is fair enough but I can't help thinking this is code you'd never actually write outside of the context of a "Look how weird JS is!" post.

That is the whole premise of the site, though. They even say that these examples aren't common syntax or patterns before you start.


The site is called "JavaScript Is Weird", not "Weird JavaScript". Even if they tell you that the examples aren't common, they're still saying that this weirdness is unique to JS, which definitely isn't true in the case of basic floating-point precision problems.


> The site is called "JavaScript Is Weird", not "Weird Javascript"

Am I being punk'd?


The former is a general statement about JavaScript itself as a whole, while the latter describes a set of examples.


Most of the comments are completely dunking on Amazon for this, though.


When you dunk 100 times you get a free t-shirt!


The best solution is to use int/long primary keys, with a UUID column that has a unique index. Then the UUID can be used in public-facing APIs.


Doesn't this just mean that 80% of orgs that were hit with ransomware attacks just didn't bother to fix their infosec, and got hit again because they left the same holes open to be exploited?

Fool me once, shame on you. Fool me twice, shame on me.


So ransomware already means they got into the system; they could open a new secret backdoor or completely tear down your security if they wanted to. Plus, it takes time to identify the ransomware and undo/remove it, so in that time they could attack again. Paying ransomware ransoms is just saying "pretty please don't do this again".


It can just as easily mean that the attacker found a second exploit after the first was resolved.


Since so many were hit by the very same ransomware group, it's likely that the attacker spotted a second exploit during the first attack. It's easier to spot things when you've already busted your way in and have the run of the place.

i.e. An attacker breaks into a system using one vulnerability, spots a few more vulnerabilities while snooping for data, files them away for future reference, extracts a ransom, and then repeats the process later after the victim fixes the first vulnerability but fails to address the others.

The takeaway lesson appears to be that, if you are hacked and fix the vulnerability that made it possible, you shouldn't stop there. You're marked as a target that pays and detailed information on your system is now out there. Even having fixed the first hack, you're more vulnerable than ever.


> Fool me once, shame on you. Fool me twice, you're not going to fool me twice.

- These Companies (probably)


Yes, but even more importantly it means they don’t have proper backups and disaster recovery.


Most likely.


Almost like having competition drives innovation up and prices down.

Too bad most ISPs have literal monopolies over entire regions.

