
Crystal and Ruby do not even remotely have the same semantics. They look similar but the similarity is extremely superficial. The extra semantics that Ruby has have a massive impact on performance, for even the most optimising implementations. This is not a reasonable comparison if you do not consider the 'slick as Ruby' part. No runtime metaprogramming! The entire Ruby ecosystem is built on runtime metaprogramming!

(Crystal is a fine language and compiler - but it's nothing to do with Ruby.)



Care to explain more? Having been using Ruby for about 8 years and Crystal for about 4, they actually have an extremely similar syntax and are also semantically very close. To the point where many Ruby scripts are completely valid Crystal, or at the very least require only a few changes.

I do think that people trying to compare Crystal to Ruby kind of miss the point though. Ruby as an interpreted language, even optimized with JIT compilation, will never match the performance you can get out of a true compiled language. By the same token, Crystal as a compiled language will never be as quick to develop with since you have to wait for your code to compile after each change.


> Care to explain more? Having been using Ruby for about 8 years and Crystal for about 4, they actually have an extremely similar syntax and are also semantically very close. To the point where many Ruby scripts are completely valid Crystal, or at the very least require only a few changes.

It doesn't have Kernel#eval. It doesn't have Kernel#send. It doesn't have Kernel#binding. It doesn't have Proc#binding. It doesn't have Kernel#instance_variable_get/set. It doesn't have Binding#local_variable_get/set. It doesn't have BasicObject#method_missing. It doesn't have BasicObject#instance_eval. I could go on. All these methods have extremely far-reaching non-local implications for the semantic model and practical performance, and specifically defeat many conventional optimisations.
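To make this concrete, here is a small hypothetical sketch in plain Ruby using a few of the methods listed above; every one of these calls is resolved at run time, so a compiler cannot statically know what a given call site will do:

```ruby
# Each of these is legal Ruby and defeats static assumptions.

class Greeter
  def hello
    "hello"
  end
end

g = Greeter.new
g.send(:hello)                       # dispatch on a runtime symbol
g.instance_variable_set(:@name, 1)   # poke state no method exposes
g.instance_variable_get(:@name)      # => 1
eval("g.hello")                      # compile and run new code now

# method_missing intercepts any call, so the full set of "methods"
# an object responds to can never be known ahead of time:
class Anything
  def method_missing(name, *args)
    "you called #{name}"
  end
end

Anything.new.whatever                # => "you called whatever"
```

Because the symbol passed to `send` or the string passed to `eval` can be computed at run time, none of these call sites can be resolved ahead of time.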

> To the point where many Ruby scripts are completely valid Crystal, or at the very least require only a few changes.

You can't even load most of the Ruby standard library without these methods!

And it doesn't matter if you use them or not. They're still there and they impact semantics and performance because the fact that you can use them affects performance. You can't even speculate against most of them as they're so non-local.

Rails and the rest of the mainstream Ruby ecosystem fundamentally depend on them.

> and are also semantically very close

Sorry I super disagree with this. They look similar. Dig into it just below the surface? Start to model it formally? Not at all. Method dispatch, which is everything in Ruby, isn't even close.

(Again, Crystal's great as its own thing, it's just not similar to Ruby's semantics. If you don't need Ruby's semantics or you can replicate them at compile time then maybe it's perfect for you.)


>> and are also semantically very close

>Sorry I super disagree with this. They look similar. Dig into it just below the surface? Start to model it formally? Not at all.

I think you're missing the point. If 90% of Ruby code works in Crystal unmodified (even if it's because the standard library had to be rewritten from scratch), then the programmer experience may well be quite similar, regardless of how fundamentally different they are if you model them formally.

Are Newtonian mechanics and Einstein's theory of general relativity "similar"? If you model them formally, they look nothing alike. But in 99% of practical situations in everyday life, and even in the most precise experiments we could conduct for hundreds of years, they're so similar we can't tell the difference.


> If 90% of Ruby code works in Crystal unmodified

False premise, since it's not the case in the first place. 90% of your Ruby code will absolutely not work in Crystal unmodified.


That was an example, I was trying to offer chrisseaton a different notion of "similarity".

Another example: if 0% of Ruby code works in Crystal unmodified, but for 90% of code the transformation was extremely simple and mechanical like using curly braces {...} instead of begin...end and prepending $ to all variable names like Bash and PHP, they would still feel extremely similar in practice, albeit obviously less similar than the above example.

By contrast, Java and JavaScript are widely described as having very similar syntax, but it is rare to translate code from one to the other without requiring fundamental rethinking, because the relationship between JS objects, functions, and prototypes is so different from that between Java objects, methods, and classes.


Depends on how you count. On an application level, no. On a class level, also no. On a method level, no, but we are getting close. On a line level, possibly. On a token level, definitely.


> 90% of your Ruby code will absolutely not work in Crystal unmodified.

Don't nail me down on an exact 90%, but for me it does. Nothing Rails-related, though. Good example: https://news.ycombinator.com/item?id=23437035

However, I agree that the fundamentals/underlying implementations are very different. It's far from being like a Python 2 to 3 migration.


Agreed. Crystal looks like it has many positive characteristics, but having similar syntax has nothing to do with having similar semantics. Without constructs like method_missing, you cannot run practically any of the Ruby ecosystem libraries, including everything involving Rails.

Java and C also share a similar syntax, but that does not mean you can easily swap one for the other.


To be fair, method_missing has caused more nightmares and problems with debugging than probably any other feature in Ruby. I actively avoid using it, and even the Rails team has massively dialed back on its use in their libraries over the years...


I find this quite amusing because method_missing has always been the difference between 'true' OO languages like Smalltalk/Ruby and pseudo-OO languages like C++, with the implication that true OO is better than pseudo-OO for the ones making such a distinction.


There has never been a "true" OO language. And if there was, Smalltalk was not it. Alan Kay did coin the term, but Simula existed long before Smalltalk. The tree of languages that includes C++, Java, and C# can be traced back to Simula, while Smalltalk inspired Ruby. There is a distinct camp of "statically typed OO" (Simula and its children) and "dynamically typed OO" (Smalltalk and its children).

Yet none of this is the one true OO. All of it remains a way of describing a human mode of expression, and so is rightly subjective.


https://cs.brown.edu/~sk/Publications/Papers/Published/kf-pr...

Programming Paradigms and Beyond, Shriram Krishnamurthi and Kathi Fisler:

OO is a widely-used term chock-full of ambiguity. At its foundation, OO depends on objects, which are values that combine data and procedures. The data are usually hidden (“encapsulated”) from the outside world and accessible only to those procedures. These procedures have one special argument, whose hidden data they can access, and are hence called methods, which are invoked through dynamic dispatch. This much seems to be common to all OO languages, but beyond this they differ widely:

* Most OO languages have one distinguished object that methods depend on, but some instead have multimethods, which can dispatch on many objects at a time.

* Some OO languages have a notion of a class, which is a template for making objects. In these languages, it is vital for programmers to understand the class-object distinction, and many students struggle with it (Eckerdal & Thune, 2005). However, many languages considered OO do not have classes. The presence or absence of classes leads to very different programming patterns.

* Most OO languages have a notion of inheritance, wherein an object can refer to some other entity to provide default behavior. However, there are huge variations in inheritance: is the other entity a class or another (prototypical) object? Can it refer to only one entity (single-inheritance) or to many (multiple-inheritance), and if the latter, how are ambiguities resolved? Is what it refers to fixed or can it change as the program runs?

* Some OO languages have types, and the role of types in determining program behavior can be subtle and can vary quite a bit across languages.

* Even though many OO aficionados take it as a given that objects should be built atop imperative state, it is not clear that one of the creators of OO, Alan Kay, intended that: “the small scale [motivation for OOP] was to find a more flexible version of assignment, and then to try to eliminate it altogether”; “[g]enerally, we don’t want the programmer to be messing around with state” (Kay, 1993).

In general, all these variations in behavior tend to get grouped together as OO, even though they lead to significantly different language designs and corresponding behaviors, and are not even exclusive to it (e.g., functional closures also encapsulate data). Thus, a phrase like “objects-first” (sec. 6.1) can in principle mean dozens of wildly different curricular structures, though in practice it seems to refer to curricula built around objects as found in Java.


FWIW, Crystal does have compile-time method_missing. Which obviously is less powerful than the runtime variant, but it is still possible to get fairly far in many practical usages.


To be clear, crystal does have method_missing.


Crystal has something with the same name, but like almost everything it has completely different semantics. You can't use it for the same things.


To dig into method_missing a bit more: when you call a non-existent ruby method on any object it has to check for and run a method called method_missing, which can contain arbitrarily complex code, on the object itself as well as every class in the inheritance hierarchy. Because ruby is a dynamic language with dynamic dispatch, you can't easily precompute the results of doing this.
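A minimal sketch of that lookup behavior, using a hypothetical Proxy class: method_missing fires only after the normal search up the ancestor chain fails, and it can run arbitrary code, so the result of a "missing" call can never be precomputed:

```ruby
# A proxy that forwards unknown calls to a wrapped target.
# method_missing is invoked only after normal lookup fails.
class Proxy
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args, &block)
    if @target.respond_to?(name)
      @target.public_send(name, *args, &block)
    else
      super # falls through to NoMethodError
    end
  end

  # Keep respond_to? consistent with the dynamic dispatch above.
  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name) || super
  end
end

proxy = Proxy.new([1, 2, 3])
proxy.size      # => 3, routed through method_missing
proxy.reverse   # => [3, 2, 1]
```

Proxy defines no `size` or `reverse` of its own, so every such call takes the slow path through method_missing, and which methods "exist" depends entirely on the wrapped object at run time.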


I programmed a little in Ruby and IMO all this dynamic stuff is redundant (to be polite).


Yeah how fast is that compiler? If it's just another compiled language (rather than the kind of wicked fast compiled language like Go), my enthusiasm will be dampened...


If it has reasonable incremental compilation, it can take a few seconds to compile.

With good code structure, I see large Java projects compile small changes in seconds, even though compiling Java used to be a hog. You don't often rebuild from scratch during development, do you?


I often switch between feature branches, when working on more than one project in a repo with multiple related modules. If there's a change near the top of that dependency graph, I'm forced to not exactly rebuild from scratch, but still to rebuild quite a lot.


Right now I'm in the very same situation. Yes, it is frustrating. This is why things close to the top of the dependency graph should be small, well-tested, and rarely need changes. But when you still need to troubleshoot them, there's no way around recompiling a lot of stuff if you want those static guarantees :(


Bad language design, IMO. The language shouldn't make people writing in it worry about how to organize their code to speed up the compiler.


In my case it's not even a language proper; I was using JavaScript which gives you near-zero static guarantees.

I was fixing an issue in one common library; properly testing changes required rebuilding and restarting a number of containers. Unit tests only tell you so much; you need proper integration tests to see how certain things interact.

If I had used a statically typechecked language (e.g. TypeScript), I could have eliminated 50%, or maybe 75%, of the testing, because the compiler would check things for me before runtime. It would be drastically faster to localize and fix the bug even if the compilation increased build times 10x.


Often I do because that's what CI does. This is pretty normal.

But the point is that, as a language designer, you should have a different view of what compilation means in the developer's workflow. Making the engineer think about how to organize the code for the compiler is bad design unless that organization is enforced by the compiler: the compiler should reject programs that are not organized for optimal compilation, and the required organization should at least not impede understanding of the code (best if it improves it). This is Go's design philosophy, and it's critically important to the success of Go.

FWIW, I see large Java projects take minutes to compile small changes, even using hot-reload tools.

Figwheel in Clojure is not like this, however: they're doing something right there.


However you frame it, compile times are going to be longer the more static guarantees you need to check, and the more dependencies a particular code change affects.

Making your code low-coupling is equally beneficial for the compiler and for the human reasoning about the code. Hence modularization, limiting the visibility of parts, etc.

OTOH there are situations when you have to have a common interface which is used across the board. Imagine Java's `List` or `CharSequence`. If you touch it, you have to recompile all the innumerable uses of it. So the more pervasive the dependency is, the smaller and simpler and more fine-grained it should be. Java's `List` does not do a hugely good job in the compactness department; it's pretty stable, though. You want the same trait from your most foundational interfaces.


I agree that the comparison is unfair - but I think the larger point is that there are many simple bits of Ruby that can copy/paste to Crystal with an immediate performance boost. In fact, it'd be interesting to slowly re-write a Ruby codebase into Crystal.

Of course, harder than it sounds, lots of specifics to figure out.


You are absolutely right. I have written Scheme interpreters in both languages. Compare https://github.com/nukata/little-scheme-in-ruby/blob/v0.3.0/...

  # Cons cell
  class Cell
    include Enumerable
    attr_reader :car
    attr_accessor :cdr

    def initialize(car, cdr)
      @car = car
      @cdr = cdr
    end

    # Yield car, cadr, caddr and so on, à la for-each in Scheme.
    def each
      j = self
      begin
        yield j.car
        j = j.cdr
      end while Cell === j
      j.nil? or raise ImproperListException, j
    end
  end # Cell
and https://github.com/nukata/little-scheme-in-crystal/blob/v0.2...

  # Cons cell
  class Cell < Obj
    include Enumerable(Val)

    getter car : Val            # Head part of the cell
    property cdr : Val          # Tail part of the cell

    def initialize(@car : Val, @cdr : Val)
    end

    # Yield car, cadr, caddr and so on, à la for-each in Scheme.
    def each
      j = self
      loop {
        yield j.as(Cell).car
        j = j.as(Cell).cdr
        break unless Cell === j
      }
      raise ImproperListException.new(j) unless j.nil?
    end
  end # Cell
and they will make the point clear. Ruby and Crystal are different languages, but you can translate your code from Ruby to Crystal line by line fairly easily.

For the performance boost, see https://github.com/nukata/little-scheme/tree/v1.3.0#performa... which shows times to solve 6-Queens on a meta-circular Scheme as follows:

* Crystal 0.34.0: crystal build --release scm.cr: 2.15 sec.

* Crystal 0.34.0: crystal scm.cr: 9.88 sec.

* Ruby 2.3.7: ruby scm.rb: 84.80 sec.

Compiled (and complex enough) Crystal code runs 39 times faster than the equivalent Ruby code in this case.


Thank you for the relevant example!



Metaprogramming doesn't need to have a performance impact. VM languages like Java, C#, and JS allow you to define and modify code at runtime.

In JS you can redefine anything; Java support is pretty good; better C# support is coming with generators.


Metaprogramming definitely has a cost, but it's one that you can minimise if you have the ability to either invalidate and recompile code at run time, or if you can perform extensive whole program analysis when ahead of time compiling.

The more extensive the metaprogramming you can do, the more work it is to implement this behind the scenes. For example, in Java you can change the visibility of fields and you can load new classes, so that's not too hard to take account of, but in Ruby you can redefine methods, add refinements so methods behave differently depending on where they are called, or radically change the inheritance hierarchy. Implementations like TruffleRuby can maintain high performance even with these features being used, but it's taken a lot of work to achieve that.
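Refinements illustrate the problem well. A sketch (with hypothetical module and method names): the same receiver answers a call differently depending on whether the calling scope has activated the refinement, so dispatch depends on lexical scope, not just the receiver's class:

```ruby
# A refinement is visible only in scopes that opt in with `using`,
# so the meaning of a call site depends on where the call appears.
module Shouting
  refine String do
    def greet
      "HELLO, #{upcase}!"
    end
  end
end

begin
  "world".greet   # not visible here: refinement not activated
rescue NoMethodError
  # expected before `using`
end

using Shouting
"world".greet     # => "HELLO, WORLD!"
```

An optimising implementation therefore cannot cache "String#greet does not exist" globally; the answer is different per lexical scope.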


On top of Ruby metaprogramming being unique, V8 and Hotspot are amazing premium deluxe engines that have had more time and/or resources.


> Metaprogramming doesn't need to have a performance impact.

Optimising away the performance impact of most of the metaprogramming features I mentioned there requires truly heroic optimisations, beyond what has ever been used for any other language.

Some of them are even worse - I'm not sure there's any way to optimise away the non-local effects of Proc#binding, which allows you to access local variables that are not lexically referenced.
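A minimal sketch of why Proc#binding is so hostile to optimisation: the binding hands out the proc's captured environment, so a local variable can be rewritten by code that never mentions it lexically, and no local can safely be kept in a register or optimised away:

```ruby
# The lambda closes over `count`; its binding exposes that same
# environment to the outside world.
def make_counter
  count = 0
  -> { count += 1 }
end

counter = make_counter
counter.call                                     # => 1
counter.binding.local_variable_set(:count, 100)  # mutate from outside
counter.call                                     # => 101
```

Nothing in `make_counter` suggests `count` can be written externally, yet any code holding the proc can do exactly that.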


Ruby's metaprogramming capabilities are why optimising Ruby is so darn difficult. It's extremely powerful, but it also complicates a lot of things.


I reckon the difference is mostly about when metaprogramming can happen. In Java, redefining or adding code is very explicit; it can't just happen whenever. Same with C#. And there are lots of rules. A class can't modify itself, and there are limits to what changes you can make. And to make changes you must have control over the "classloader" that loaded that code.

JS on the other hand can do most of what ruby does, to my knowledge. Objects are key value pairs so you're free to mess with them in virtually any way you please. You can also mess with the inheritance by altering JS prototype chains.

I don't think metaprogramming itself has much to do with the speed of Ruby, with my admittedly limited knowledge of this stuff


However, JS does not treat everything as an object as pervasively as Ruby does. A JS object and a JS integer are two different abstract data types. A Ruby number literal "1" is treated as an object of the Integer class. There are no separate abstract data types other than objects. All operators for an Integer can be overridden (at runtime), or perhaps, a specific object's methods can be overridden. Now granted, the runtime cheats and implements certain things in C ... but those can be overridden during runtime ...
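A quick illustration (the `double` method is a made-up name for this sketch): core classes like Integer can be reopened and extended at run time, and the change applies to every integer, including literals, immediately:

```ruby
# Reopen the core Integer class and add a method.
class Integer
  def double
    self * 2
  end
end

1.double          # => 2
(1 + 2).double    # => 6
1.is_a?(Object)   # => true: literals are ordinary objects
```

Because even `1` dispatches through the normal method machinery, the runtime must be prepared for any core method, including arithmetic operators, to be redefined at any moment.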

If you want to find out more about the limits of what can be done to optimize Ruby, check out the Truffle project. That came out of someone's PhD dissertation on novel methods for doing JIT optimization for Ruby. It is sufficiently difficult and novel to warrant awarding a PhD for. Last time I heard, Truffle still could not run Rails.


A JS integer is actually an instance of the "Number" object, you can to a small degree alter fundamental behavior even with primitive types


No it is not. It is a primitive, this difference is well defined in the ECMAScript spec. Same for strings. An instanceof String is strictly not the same as a string primitive (and there are runtime consequences).


You need to enclose numbers in parentheses to access the prototype: 1.toString is a syntax error, whereas (1).toString() is "1"


I stand corrected.


Ruby's "metaprogramming" is something Java, C# and JS don't do well -- dynamically redefining things during runtime. Everything in Ruby, including literals and operators, can be redefined during runtime, because everything is an object, and every message passed to any object can be redirected, filtered, transformed ad hoc. It's not just classes can be modified. Specific objects can be modified. Well-crafted Ruby code breaks things up into mixins that can be composed together. The closest comparison is one of Ruby's inspiration -- Smalltalk.

I think the most exciting optimization people have seen with Ruby is Truffle.

I don't regret the 14 years I put into writing Ruby professionally. I've used and abused metaprogramming, and it has shaped how I reason and architect things. I learned to appreciate well-designed, semantically-meaningful DSL. But I've moved on. I write server code with Elixir these days, and I'm exploring other ways of reasoning and writing code.


Why did you have to move on?


I didn't have to move on. I chose to. I had learned what I wanted from Ruby, and I started to realize that the problems I was facing were leading me to reimplement some of the things that OTP already offered. I was getting more interested in concurrent, resilient systems. It came around the time it converged with my interest in permaculture (which is also about resilient, regenerative systems).


Java and C# essentially let you dynamically load/JIT code (not to be confused with the JIT-to-native virtual machine implementations they often run on); JavaScript is much closer to Ruby in that sense. It also gets really slow if you try to do any of those things extensively.


you obviously did not do metaprogramming in Ruby. Try it and you will never look at the "metaprogramming" capabilities of other languages in the same way.

no offence to JS, but JS and a proper programming language are not even the same species.


For real metaprogramming try Clojure. It's another world entirely.


Lisps are far more natural at metaprogramming; Clojure is the most popular Lisp at the moment.


isn’t clojure like a lisp with extra steps?


Ruby has to have runtime metaprogramming because it has no other time but run time.

Compile-time metaprogramming is way safer and more performant, but for it you need a compiler.


But Crystal doesn't have those either.




