Hacker News
ZeroTalent | 11 months ago | on: Mercury: Commercial-scale diffusion language model

Look into groq.com, guys. Some good models run at similar speed to Inception Labs.
sujayk_33 | 11 months ago

It's faster inference because of the hardware (LPUs); the question here is about architectures (autoregressive vs. diffusion).
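To make the architectural distinction above concrete, here is a toy sketch (unrelated to Mercury's or Groq's actual implementations; all function names are illustrative): an autoregressive decoder must make one sequential model call per output token, while a diffusion-style decoder refines all positions in parallel over a small, fixed number of denoising steps.

```python
def ar_decode(length):
    """Autoregressive: one sequential model call per generated token."""
    steps = 0
    tokens = []
    for _ in range(length):
        tokens.append("tok")  # stand-in for one forward pass
        steps += 1
    return tokens, steps

def diffusion_decode(length, num_denoise_steps=4):
    """Diffusion-style: each step updates every position at once."""
    tokens = ["mask"] * length
    steps = 0
    for _ in range(num_denoise_steps):
        tokens = ["tok"] * length  # stand-in for one parallel refinement pass
        steps += 1
    return tokens, steps

_, ar_steps = ar_decode(16)
_, diff_steps = diffusion_decode(16)
print(ar_steps, diff_steps)  # 16 sequential calls vs. 4 parallel passes
```

The point is that faster hardware shrinks the cost of each call, but only a change of architecture shrinks the *number* of sequential calls.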
ZeroTalent | 11 months ago

I realize that, but it can be used now with many models in real-life situations. I just wanted to mention it in case someone doesn't know about it.
rfv6723 | 11 months ago

SRAM doesn't scale with advanced semiconductor nodes. Groq is heading toward a dead end.