Intro to Low-Level Graphics on Linux (betteros.org)
370 points by mabynogy on Aug 30, 2017 | hide | past | favorite | 108 comments


Personally I find the heavy use of ioctls in modern Linux APIs a bit distasteful. Sure, it's pragmatic and easy, but it really feels like an abuse of the UNIX mantra of "everything is a file" when the only operation you can do on that "file" is some specific ioctl. I first noticed this in the very nice KVM article that ran on LWN a few years back, and now again here. I really wish there were some more structured way of talking to the kernel, but I can understand that introducing a ton of new syscalls probably is not practical.


Nitpick: DRM files also support mmap(), which is how you map video memory into user space.

To your overall point though: do you think there's something better?

The operations required by user-mode drivers and APIs just don't map well to file operations. On the other hand, creating dedicated syscalls for graphics isn't very attractive either, because many of the ioctls are quite hardware-specific.

Using ioctls is a pragmatic solution, and I doubt you can really come up with something much better, though I'd be interested in being proven wrong.


I think something like bus1 might be neat, but admittedly I haven't thought that through very thoroughly. Of course bus1 is not even in mainline yet, and ioctl APIs have been around for decades.


> I really wish there was some more structured way of talking to the kernel

There's netlink. It's "more structured", in that it has a lot more structures.

Modules of common subsystems already support the same ioctl structs where this makes sense. Any mechanism replacing ioctls would only change the way you call it, it cannot reduce the inherent differences between modules. Well, one could use D-Bus or some generic property / key-value system, but this would only be generic on the surface.


The "everything is a file" principle in Unix really hasn't been true since 1983, when Berkeley sockets were introduced. Network interfaces are neither files nor sockets, and there are plenty of Linux concepts that are yet something else.

Just as a side note to your ioctl criticism: I'm not saying ioctls are great.


> Personally I find the heavy use of ioctls in modern Linux APIs bit distasteful.

That's how things work in Windows as well. That's how userland talks to kernel mode drivers.

Well, except everything is an object.


> That's how things work in Windows as well.

GPU drivers are different. Basically, Windows has Direct3D already in the kernel, exposing a relatively large API surface to the userland half of the driver.

https://docs.microsoft.com/en-us/windows-hardware/drivers/di...


Which has nothing to say about whether it's a good idea.


They have come up with plenty more ways to talk to the kernel.

Just look at the absolute abortion that is "netlink". Very deeply nested structures with self-referencing pointers and sizeofs everywhere. You are looking at 300 LoC for the most trivial operations unless you want to use a shitty (like extfs tools shitty) library.

I actually prefer the ioctl; it keeps in check the insanity of the structure astronauts who gave us netlink.


Yeah, I think ioctl is probably the most elegant solution. The fact that hardware has wide variation in properties, requests, etc. means some part of the interface is going to be inelegant and case-dependent. ioctl at least puts all that variation behind a single system call.


Did Plan 9 try to fix this?


This interesting article also reminded me of another HN story (comment really) [1] about Carmack's interest in providing Plan 9 with great interactive graphics. It's IMO a great pity they didn't consider his input more seriously.

[1] https://news.ycombinator.com/item?id=14523222



Interesting write up, but it would be useful to explain how major graphics APIs fit into this (i.e. Vulkan / OpenGL) and how it all glues together (GPU drivers with KMS/DRM and higher level APIs).

Also note that Mir is now basically gone: Canonical decided to end that rift and stick with Wayland.


Is there a decent write-up of how the graphics stack goes, from userland to the monitor? I have a really hazy picture of how framebuffers, drivers, compositors, GPUs, display servers etc. all fit together. I've read good tutorials about the various pieces, but don't really grok how they inter-relate.



That is a particularly superb overview. Thank you for sharing it.


(Author of that article here)

It's a bit outdated and honestly needs replacing at this point. Xplain[0] replaces parts of it, and some of my newer blog posts[1][2] replace other parts. I can also answer any questions people have about the stack.

[0] http://magcius.github.io/xplain/article/

[1] http://blog.mecheye.net/2014/06/xdg-shell/

[2] http://blog.mecheye.net/2016/01/dri/


Some update more focused on Wayland and its compositors would be especially useful.


This article was extremely complete and useful, please update it :)


Also, I'm curious about portable (i.e. across different Unixes) low-level graphics, if such a thing exists.

I assume that the interfaces discussed here are Linux specific, and the BSDs have something different.

Does POSIX have anything to say about graphics interfaces?

What is the lowest-level portable way to draw graphics? X11?

It's funny, because I always thought of X as being more of a windowing abstraction, rather than graphics in general. Perhaps I am making a distinction that does not really exist.

Obviously I am not a graphics guy :)


The BSDs and Solaris/Illumos have had DRM for years (where available), but it tends to lag the Linux version significantly because it's not trivial to port the code from Linux. I'm not sure where the compatibility breaks relative to the code given on the page, but most of them have some derivative of the original drm.h header that defines most of the structs and ioctls mentioned (e.g. [1] [2] [3])

[1] https://github.com/freebsd/freebsd/blob/master/sys/dev/drm/d...

[2] https://github.com/openbsd/src/blob/master/sys/dev/pci/drm/d...

[3] https://github.com/illumos/illumos-gate/blob/master/usr/src/...


Ah, thanks. Interesting.


> Does POSIX have anything to say about graphics interfaces?

Not at all. Writing 100% compliant POSIX code means sticking to the PDP-11 model of what an application is supposed to be, meaning either a terminal text application or a daemon.

Anything graphical related is a separate standard.

http://www.opengroup.org/standards/unix


I don't think there is any common low level way, even among the Linux flavors. Android's graphics stack, for example, is completely different and doesn't use any of this.

If you want portable you're going to want something like SDL probably.


Android is a weird deviation, and kind of annoying because the rift goes all the way down to libc, which makes Android blobs incompatible without translation hacks like libhybris. In the case of the GPU you can at least use something like Freedreno to avoid such hacks, but good luck getting open drivers for the modem and the like. So those blobs are a necessary evil for some devices to be usable, and there are none built against glibc.


libhybris would still exist if Android had been running glibc. You still have architecture-specific stuff at the end of the day. Just like how XWayland exists to map X11 to Wayland, even though both use the same glibc.

Looking through the source of libhybris it looks like very little of it has anything to do with bionic itself.


I thought its whole point was to work around the idiosyncrasies of bionic and translate them into glibc proper. Or do you mean that the Android driver architecture is very different from proper Linux drivers?


Yes, completely different. This is how the graphics stack looks on Android: a mix of Java, C and C++ code.

https://source.android.com/devices/graphics/

https://source.android.com/devices/graphics/architecture


Android can use Mesa, so that's not convincing.


No Android handset is being sold with Mesa instead of SurfaceFlinger.

Can you please prove me wrong?


Mesa and SurfaceFlinger do two completely different things. You can absolutely use Mesa on Android (the emulator does, for example), but just like on the desktop, you're typically using the GPU vendor's accelerated GL implementation instead of Mesa. If the vendor's driver uses Mesa, as with the Intel GPUs, that still works fine on Android.


> the emulator does, for example

No it doesn't.

"Android Studio: The Mesa software renderer is deprecated. Use Swiftshader for software rendering"

https://stackoverflow.com/questions/39568455/android-studio-...

Intel has dropped out of Android market.


SurfaceFlinger is a compositor. Mesa is lower level. Search yourself (Android with Mesa).


Which means you cannot answer my question.


Which means you don't know what you are talking about. Did you search what I said? It's not the first time you do it for the sake of flaming.


The FOSS religion days are well behind me for worrying about flame wars.

The very fact that you cannot prove me wrong, tells it all.

Given that I also work with Android, are you so naive to think I need to search for whatever you want me to find?!?

Sure there is Mesa on the emulator, which is deprecated by the way.

Oh and in those Intel devices that are no longer being sold.


bionic is largely just a subset of glibc. There aren't really any idiosyncrasies to work around going from bionic to glibc (the hard part is the reverse, as people commonly use non-portable glibc extensions).

libhybris is more about things like converting SurfaceFlinger calls into Wayland ones. It looks like there's some media, camera, and input stuff too, all of which is outside the realm of bionic/glibc.


Looks like FreeBSD is following Linux design for DRM/KMS: https://wiki.freebsd.org/Graphics


Probably because that's how the vendors distribute their proprietary drivers.


Actually, the Nvidia blob didn't start using DRM/KMS until very recently. I think it's more because of Mesa: it makes sense for FreeBSD to leverage the major open project that does the heavy lifting for graphics.


With which Ubuntu release did/does Wayland start? Does Wayland obsolete Xorg, or complement it?


17.10 will be the first version to use Wayland: http://www.omgubuntu.co.uk/2017/08/ubuntu-confirm-wayland-de...


Ok, some corrections first. Wayland is a protocol, not an implementation. Xorg is an implementation of a protocol, not the protocol itself.

Weston is the reference implementation of Wayland, but it is not the only one - KDE's KWin is another.

Xorg is the current most common implementation of the X11 protocol, however - again - it is not the only one (although almost all Linux desktop systems today use it) and there are other implementations.

So how do X11 and Wayland relate to each other? Well, they are both protocols for providing a windowing system. They each do things differently, with pros and cons to each approach. They are different enough so that one can be implemented on top of the other, so compatibility should not be a problem, although obviously features are lost or crippled when doing so.

There is this misconception that Wayland obsoletes X11, but they are very different systems, designed by different people and worked on by (mostly) different developers, that only superficially provide the same feature (showing windows on screen); at least personally, I do not see one replacing the other any time soon, if ever. The way I see it, it is similar to asking if Atom (or VSCode, if that is your poison) obsoletes Emacs (or Vim).


Weston and KWin are Wayland compositors. Wayland should obsolete X11 (it's not a misconception; it's one of the primary goals), but it's a humongous task, especially on the application side. And Wayland is not yet feature-complete enough to replace all common use cases; for example, it still lacks a standard protocol for screen recording and screenshots that fits its security model.


> it's not a misconception, it's one of the primary goals

That is what Wayland's developers want, but that doesn't mean it will happen :-P


I'll also miss my XMonad...


Also, there are no guarantees that any mainstream browser will ever use the Wayland protocol directly without going through an X compatibility layer like XWayland. And XWayland consists of most or all of the X11 code base.

Yes, Firefox and Chrome are open source, but they're atypical of open-source projects in that (a) the amount of implementation effort required to make the kind of changes under discussion here is much larger than for typical open-source projects, and (b) institutional users and knowledgeable (or simply well-advised) individual users of Firefox and Chrome won't use your fork unless you have a credible story about how you can protect them from 0-days. I know of no fork of either Firefox or Chrome that has ever had a credible story, but I've spent only a couple of hours looking.

Google has their own replacement for X11 which they use in some Chromebooks, which reduces the probability that they'll throw in behind Wayland.

Not every OS install needs a mainstream browser (in-vehicle entertainment systems do not seem to: they're heavy users of Wayland) but if your OS install does, and you're waiting for Wayland (more precisely, software that uses the Wayland protocol) to replace xorg or X11, you might be waiting a long time.

"mainstream": provides a decent experience on most popular web sites including e-commerce sites.


> Also, there are no guarantees that any mainstream browser will ever use the Wayland protocol directly without going through an X compatibility layer like XWayland.

There's already a Wayland version of Firefox available: https://firefox-flatpak.mojefedora.cz/

Relevant bug report: https://bugzilla.mozilla.org/show_bug.cgi?id=635134


OK, even though "As expected it is not stable nor usable yet"[1], that makes me more optimistic. Thanks.

[1]: https://project-insanity.org/blog/2017/02/14/try-out-firefox...


Aren't Google using Wayland in ChromeOS? If not, it's pretty bad and just another rift like their SurfaceFlinger on Android.

I'm also waiting for Firefox to start supporting Wayland without X11, but it's progressing really slowly.

Besides browsers, another very major piece of work is required for Wine. It's not even moving anywhere right now, since the developers aren't sure how to address it; they are too dependent on X11 for window placement and the like.

So XWayland will need to be adequate to address these cases, including gaming, until the major projects complete all that work.


Google is only using Wayland in ChromeOS for their Exosphere component to embed Android apps.

https://bugs.chromium.org/p/chromium/issues/detail?id=549781


>Aren't Google using Wayland in ChromeOS?

Not the last I heard (579 days ago):

https://news.ycombinator.com/item?id=10996545


> Aren't Google using Wayland in ChromeOS? If not, it's pretty bad and just another rift like their SurfaceFlinger on Android.

Except SurfaceFlinger predates Wayland by years. Kinda hard to use something that doesn't exist. ;)

At the time the complaint was that Android didn't use X11 but I think it's safe to say that was a good decision at the time. As for merging them at this point they both have too much inertia combined with some fundamental design differences that'd be hard to reconcile.


Their main failure wasn't just that they didn't collaborate with Wayland (which wasn't a lot later, by the way), but that they didn't want to work on a big problem that benefits Linux as a whole (replacing X11) and instead made something that benefits only them, which created a huge and damaging rift. That they can be blamed for.

In contrast, Nokia didn't want to create such a rift and, until it went bust, used the common graphics stack (with SailfishOS, the MeeGo descendant, using Wayland).


> (which wasn't a lot later by the way)

Wayland was started the same year that Android shipped, and Wayland didn't have its first release until 4 years later. That's obviously way too late for Android to have used it or collaborated on it.

> but the fact that they didn't want to work on big problem that benefits Linux as a whole (replacing X11), but made something that benefits only them, and that created a huge and damaging rift. That they can be blamed for.

At the time nobody wanted a replacement to X11. That was a big complaint against Wayland, too.

Getting into mailing list wars and trying to make everyone happy ensures you end up like Nokia - too late to matter and with no more money. That's not helpful to anyone.

Also please explain how Google created a "huge and damaging rift"? What damage resulted from this at all?


> Also please explain how Google created a "huge and damaging rift"? What damage resulted from this at all?

Hardware becoming Android only. See https://lwn.net/Articles/504865/

A major part of that problem is the rift that created incompatibility and prevented reuse of existing progress. That was the whole reason for making libhybris: to unblock that issue.


That article is long outdated. Mainline and Android kernels have mostly converged at this point.

But that's also wholly unrelated to the specific context here which was Wayland vs. SurfaceFlinger.

Wayland was your "huge damaging rift" here, if anything, divorcing itself from both Xorg and SurfaceFlinger, for no particular reason at the time in the case of the latter.


The article doesn't focus on drivers, but the idea is the same. Android did create the rift, and it's still here. It's the new "Windows" in the hardware-compatibility sense.


> Getting into mailing list wars and trying to make everyone happy ensures you end up like Nokia - too late to matter and with no more money. That's not helpful to anyone.

Not arguing about the rest of your post, but the above is incorrect.

Nokia didn't go bust because they tried to cooperate with the rest of the world. You can be cooperative and still say no to people.

If I'm not mistaken, Android was developed behind closed doors for a long time, for business reasons.


Well, you can also look at how the initial attempts by Google and others to upstream went. Mainline outright rejected some of the core tenets of what Android was doing just because they hadn't been an issue on desktops/servers. Things have since slowly changed, but even stuff that shouldn't have been controversial (like ashmem) still took years, and when mainline finally added an equivalent (memfd_create) it was just incompatible enough in core behavior that Android couldn't switch to it.


It was the other way around: it went badly because they didn't collaborate from the start. Android was a huge code drop when it was published, and upstreaming wasn't exactly their primary focus.


Vulkan & OpenGL are just really fast and complicated ways of putting pixels into the buffer you get from Wayland, DRM, etc... It's somewhat orthogonal.


Yeah, I get that eventually they just plug into lower levels, but there is still some gluing of it all together that parts of Mesa address.


Yeah, the specifics I can only hand-wave about, but I would like more of the nice details. You get a "native" window surface from Mesa via DRM (the GBM API is one choice here, I believe). You pass these handles to EGL for your display, native window, and surface. You then bind the OpenGL ES 2 API and you're off to the races.


> you pass these returns to EGL for your display, native window, and surface.

EGL if you use OpenGL, and WSI if Vulkan, if I understand correctly.


Using the Linux kernel API is too high level :) Back in 1999 I wrote a simple 3D engine that hooked into a lower level API: the code was calling directly into the VESA BIOS Extensions of the graphics card. This worked on Windows 98 I think, but not NT4. https://en.wikipedia.org/wiki/VESA_BIOS_Extensions


You used BIOS calls? Luxury! I used VGA registers directly to get VGA Mode X back in the day. http://www.jagregory.com/abrash-black-book/ now that I call low level.


You had a VGA controller? That's nice. I had to design my own back in the day. https://github.com/a3f/r3k.vhdl/tree/Demo-v1/vhdl/io/vga


You have an FPGA? That's also nice, all I have is a bunch of resistors and an MCU :) https://hackaday.com/2014/02/23/the-bitbox-console-gets-upgr... (nice project btw!)


You win ;)


These articles always make me appreciate the work that must go into something like SDL or OpenGL and the like. It's easy to take these things for granted, but if this is the stuff that they have to make everything work, then I have to give some props and respect to the authors.


A nice way to appreciate it, and a learning exercise at the same time, is to try to implement software rendering like we did in the MS-DOS days.

For something more modern, grab an ESP32 coupled with an LCD screen, for example.


I made my first serious attempt almost as soon as I started CompSci at university. That first attempt took a long time, as I did not really know how matrices applied to computer graphics, so I redid all the operations again and again whenever they were required.

My first rotating cube "exploded" due to numerical errors, as I was modifying the geometry instead of using matrix projections.

That made me appreciate the linear algebra classes much more.


Great write-up. fbdev is straightforward, but the kernel is phasing it out in favor of DRM; I hope DRM will maintain fbdev emulation at least. It's sad SDL2 doesn't have fbdev support either, as most people are fond of OpenGL/2D/3D acceleration these days.


As much as I would want to push people away from the fbdev API (doesn't support multimonitor, page flipping, cursor planes, etc.), it will be supported in an emulation layer for new DRM-based devices. Mostly for fbcon, but still supported.

https://github.com/torvalds/linux/blob/master/drivers/gpu/dr...


Wouldn't removing it break the "never break the user space" promise Linux has?


No. The "never break user space" thing (AFAIUI, at least) is about not changing existing behavior. Removal of features is a different thing altogether which usually only happens after long deprecation periods or when there's nobody around to support the code.


The issue is that Linus works at the Linux Foundation, which is sponsored by big companies (Intel, etc.) whose focus is on non-embedded devices. The framebuffer is extremely useful for low-end embedded products that have no GPU and don't need one; there are more low-end embedded graphics devices than Intel/PC/data-center machines.


Does "DRM" here mean something other than the "DRM" most of us are used to?


Direct Rendering Manager: https://en.m.wikipedia.org/wiki/Direct_Rendering_Manager

Very different from Digital Rights Management.


Phew, I thought Digital Rights Management was snaking its way into every rendering operation on every machine. I'm sure advocates wouldn't mind that.


Fear not, HDCP makes an effort to usher in your dystopia:

https://en.m.wikipedia.org/wiki/High-bandwidth_Digital_Conte...


DRI/DRM to me seems overly fiddly compared to what we had before.

Things just seem to break in nasty ways if I don't manage to get the kernel and userspace stuff to line up within a certain range of minor versions, and I can't seem to find a good straight list of just what those lineups are.

Effectively, DRI/DRM seems to be developed by the major devs, for the major devs, and screw anyone and everyone who doesn't read every damn mailing list and IRC channel with religious fervor.

Between that and the udev stuff, I fear that Torvalds has gotten too trusting of his subsystem maintainers in recent years, resulting in interface churn because both the kernel and userspace parts are done by the same people with no regard for anyone but themselves.


I think Linus is aware of the problems: http://lkml.iu.edu//hypermail/linux/kernel/1404.0/01331.html


Kinda, but the only time Sievers' antics come up is when they impact the kernel (and its developers) directly, not when they make userspace more complicated to maintain (unless you happen to be on a first-name basis with the maintainers of said subsystems).


Anyone remember SVGAlib? I wish it were still around. Maybe it should be ported on top of OpenGL and made cross-platform. Would love to have something similar on macOS.


Yes, having to make applications setuid.

I used it for a while, if I remember correctly I even wrote a pixmap loader.

Why not just use SDL, or would you like to port old games?

Nowadays on macOS I would just use SpriteKit.


To me Wayland is basically that for the GL generation...


That takes me back, fbdev was pretty useful for embedded linux configured with no X, dedicated Qt apps... or even just for putting up the all important custom boot splash screen.


The frontpage mentions Wirth's Law: "software is getting slower more rapidly than hardware becomes faster". https://en.wikipedia.org/wiki/Wirth%27s_law


Tangential question from an embedded-but-not-Linux developer:

The structs use types like __u32 or __u16. I know the kernel defines its own types for internal usage[1], but why redefine types exposed to user space? Why not use the C99 <stdint.h> types?

Is it only historical?

And why are they double-underscore?

[1] because... reasons? I never really understood why


Generally speaking, when you interface with a C library you should use the same types it uses. So while you can likely get by with `stdint.h`'s `uint64_t` on _most_ platforms, you hurt portability.

Example:

This is because `stdint.h` isn't 100% compatible with _every_ platform Linux supports. This gets into hare-brained definitions of pointer lengths on some obscure platforms. For example, on SPARC64 `long`, `long long`, `void *`, and `uint64_t` aren't all the same size (`long long` matches `uint64_t`, while `void *` and `long` match `uint32_t` in userland, but in kernel land `void *` matches `uint64_t`).

Similarly, C11 atomics can fail on ARM and PPC under some scenarios, so the kernel doesn't use those either.

It's my understanding that the kernel supports more platforms than the C standardization committee xD

---

So the kernel likes to define its own primitives, and when you interface with it, it is generally best to just use those definitions. Or, if you want to use _standard_ types, you have to understand you are hurting portability.


Thanks for your explanation :)

One thing I still don't understand though:

> This is because `stdint.h` isn't 100% compatible with _every_ platform Linux supports. This gets into hare-brained definitions of pointer lengths on some obscure platforms. For example, on SPARC64 `long`, `long long`, `void *`, and `uint64_t` aren't all the same size (`long long` matches `uint64_t`, while `void *` and `long` match `uint32_t`, but in kernel land `void *` matches `uint64_t`).

In these cases, you just defined the variably sized types (void *) in terms of the fixed-length types (uint64_t). That would suggest the fixed-length types are the absolute ones, and IMO good candidates to base absolutely-sized types on. And yet that's still not good enough?

A bit of googling brings up the book Linux Device Drivers, 3rd Ed. (2005)[1], in which chapter 11 specifically addresses the topic of types. It argues against the usage of standard C types (int, long, etc.) specifically because of their size variability. That's all well and good, but it makes no mention of stdint.h!

A hypothesis comes to mind: maybe stdint.h is meant to be a user-space header exposed by the system/compiler, and the kernel, having to be entirely self-contained, can't depend on it? So it just redefines the types for itself, with the side effect of exposing those to userspace at the API boundary?

[1] https://lwn.net/Kernel/LDD3/


Ah, found the explanation:

The problem with that idea is that the kernel cannot count on those types being consistently defined for all configurations, and cannot create its own definitions for the standard types. So the kernel/user interface must continue to be defined using kernel-specific types (__u16 and such).

From: https://lwn.net/Articles/113349/

And as close to Word Of God as I can get:

http://yarchive.net/comp/linux/int_types.html


Why is your solution to variably sized pointers to reimplement what the kernel already exports, surely with less well-tested code?


What platforms are we talking about? Anything actually used by consumers or businesses, or is it all just stuff with esoteric support from hobbyists and people reviving super-old hardware?

I would imagine that if someone's platform doesn't support C11 at this point, it pretty much hasn't been updated in 7 years and pretty much can't build a custom GCC. I mean, I know how to build GCC on a Sega Dreamcast, so it has to be more esoteric than that.

Also, not supporting C11 probably means not supporting lots of other things that modern applications might want, like Vulkan, IPv6, or multiple threads.


I notice there are no pictures


Just how low level _are_ these graphics?


Quite a bit off topic, but this website is a joy to browse just for its simplicity, high visibility, and speed. Just good ole basic HTML & PHP pages without a ton of overhead.


> high visibility

The moment it loaded my eyes felt out of focus and everything had a certain fuzziness to it.

The color combinations don't seem to agree with font smoothing, at least not on OS X with Safari on a 3440x1440 monitor.


Firefox on Windows: the font color and background color don't look like they have enough contrast.


I used "Inspect Element" to change the body{background-color} and the body{color} because I didn't like the combination.


> high visibility

Really? I found it quite difficult to read due to the text color being too close to the background color.


It's similar to my preferred terminal settings. Dark background, light grey text, some brighter colors to highlight important things.


For me, light fonts on dark backgrounds are very difficult to read; I quickly get a lot of afterimages, making reading the text very difficult. I was always fine with green and amber monochrome screens back then; perhaps my eyes just were much younger. I have not been able to find out what changed that for me, unfortunately.


A monitor with deep blacks seems to show this well, but with high gamma or bright blacks it looks bad. I have two monitors and it looks way better on one than the other.


One would expect some images in a graphics tutorial...



