
It mostly solves this problem:

- write code

- run tests

- commit code

- update CI

- commit

- CI broken

- update CI

- commit

- CI broken

- update CI

- ...

The workarounds for this are generally awful.

For Jenkins, you stand up your own instance locally and configure your webhooks to point at it. It's exactly as terrible as it sounds, and I never recommend this approach.

For Travis and Concourse (I think), you can use their CLI to spin up a runner locally and run your CI/CD YAML against it. It works "fine," as long as you're okay with that local runner differing from the runners they actually use in their hosted environments (and especially from your self-hosted runners).

In GitHub Actions, you can use Act to create a Dockerized runner from your own image, which parses your workflow YAML and does what you want. This actually works quite well and is something that threatens Dagger, IMO.
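For anyone who hasn't tried Act, typical invocations look something like this (the job id and custom runner image are examples, not recommendations):

```shell
# List the jobs Act finds in .github/workflows/
act -l

# Run everything triggered by a push event
act push

# Run a single job (the job id "build" is hypothetical)
act -j build

# Map the ubuntu-latest label to a custom runner image
act -P ubuntu-latest=catthehacker/ubuntu:act-latest
```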

Other CI systems that I've used don't have an answer for this very grating problem.

Another, lower-order problem Dagger appears to solve is using a markup language to express higher-level constructs like loops, conditionals, and relationships. They're using CUE to do this, though I'm not sure if hiring the creator of BCL (Borgmon Configuration Language) was the move. BCL was notoriously difficult to pick up, despite being very powerful and flexible. I say "lower-order" because many CI systems have decent-enough constructs for these, and this isn't something I'd consider a killer feature.
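To give a flavor of what those higher-level constructs look like in CUE (this is a generic sketch, not Dagger's actual schema):

```cue
#Step: {
	name: string
	run:  string
}

// A comprehension -- the kind of loop you can't express in plain YAML
packages: ["api", "worker", "web"]
steps: [for p in packages {name: "test-\(p)", run: "go test ./\(p)/..."}]
steps: [...#Step]
```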

I _also_ like that it assumes Dockerized runners by default, as every other CI product still relies on VMs for build work. VMs are useful for bigger projects that have a lot of tight-knit dependencies, but for most projects out there, Dockerized runners are fine, and they're often a pain to get going in CI (though this has changed over the years).



My "workaround", if you can call it one, is to design things so they don't need the CI/CD server to get a build/test/deploy feedback loop. I should be able to run any stage of the pipeline without the server, so no code is committed until I know it works. The pipeline is basically a main() function that strings together the things I can already do locally. If I need anything intelligent to happen at any stage of the pipeline, I write a tool to do it in Go or Python or something I can write tests for and treat as Real Software. After many years of fighting with this, it's the approach that has worked best for me.
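A minimal sketch of that pipeline-as-main() idea in Python (the stage commands are placeholders; the point is that the same entry point runs locally and on the server):

```python
import subprocess
import sys

# Hypothetical stage commands -- substitute your project's real ones.
STAGES = [
    ("build", ["go", "build", "./..."]),
    ("test", ["go", "test", "./..."]),
    ("deploy", ["./scripts/deploy.sh"]),
]


def run_pipeline(stages, runner=subprocess.run):
    """Run each stage in order, stopping at the first failure.

    Returns (stages_that_ran, success). The runner is injectable so the
    pipeline itself can be unit-tested like any other Real Software.
    """
    completed = []
    for name, cmd in stages:
        result = runner(cmd)
        completed.append(name)
        if result.returncode != 0:
            print(f"stage {name!r} failed", file=sys.stderr)
            return completed, False
    return completed, True


# Usage (hypothetical): call run_pipeline(STAGES) from a tiny CLI
# wrapper locally, and have the CI server invoke the same wrapper.
```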

I didn't dig deeply into the docs, but Dagger appears to run a multi-stage pipeline locally. If that's the case, I wouldn't want that either. I use Concourse, which has very good visualizations of the stages, and if I used Dagger there, it would consolidate those stages into one box without much feedback from the UI. Also, with Concourse you can use `fly execute` to run tasks against your code on the actual server, without having to push anything to a repo.


Jenkins lets you replay a Pipeline, with changes, which is massively useful: it removes the need to change things locally and commit.


Concourse has `fly execute` which makes the commit-push-curse problem go away. It's had it since 2015 or so.
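For anyone who hasn't seen it, a typical `fly execute` invocation looks something like this (the target name and task file path are made up):

```shell
# Run the task definition against your local working tree,
# on the real Concourse workers, without pushing a commit
fly -t my-target execute -c ci/unit.yml -i repo=.
```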


Concourse also has `fly hijack`, which is the baddest/funniest command of the decade. It's also very nice to use, instantly logging you into the remote container of a failed build so you can poke around and see what actually went wrong, and try running things interactively before fixing and re-executing. Much better than poking at things in the dark until you hit another issue...


> every other CI product still relies on VMs for build work.

Gitlab CI has dockerized runners? Works great!
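Picking a Docker image per job in `.gitlab-ci.yml` is a one-liner (the image and script here are placeholders):

```yaml
# Runs on a GitLab runner using the Docker executor
test:
  image: golang:1.22
  script:
    - go test ./...
```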



