If you take a modern C++ or Rust compiler and think about optimizing it…
There’s lots of “low-hanging fruit” in the form of incrementality and parallelism.

Not truly “low-hanging” as in easy to implement; it’s actually extremely hard. But it’s easy to theorize about. It’s known to be *possible*, and capable of massive speedups in various cases.

But what about the rest? How much room is there for large speedups just by optimizing algorithms? To me that feels like much more of an unknown.

@comex I have this idea that compilers represent data incorrectly for modern platforms. Currently we have a vast ocean of tiny nodes full of pointers, and we just follow pointers all day. What we need is regular, tabular internal representations that we can throw onto tensor cores.
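(For readers unfamiliar with the contrast being drawn: a minimal, hypothetical sketch of the two layouts — a conventional pointer-per-node AST versus a flat, tabular IR where all nodes live in one contiguous array and children are named by index. The names `PtrExpr`/`FlatExpr` are made up for illustration; the index-based layout is the kind of regular representation that bulk/parallel passes can work over.)

```rust
// Pointer-based AST: a heap allocation per node; traversal chases pointers.
enum PtrExpr {
    Num(i64),
    Add(Box<PtrExpr>, Box<PtrExpr>),
}

fn eval_ptr(e: &PtrExpr) -> i64 {
    match e {
        PtrExpr::Num(n) => *n,
        PtrExpr::Add(a, b) => eval_ptr(a) + eval_ptr(b),
    }
}

// Tabular IR: all nodes in one contiguous Vec, children referred to by index.
// Cache-friendly, trivially serializable, and amenable to data-parallel passes.
#[derive(Clone, Copy)]
enum FlatExpr {
    Num(i64),
    Add(u32, u32), // indices into the same table
}

fn eval_flat(table: &[FlatExpr], root: u32) -> i64 {
    match table[root as usize] {
        FlatExpr::Num(n) => n,
        FlatExpr::Add(a, b) => eval_flat(table, a) + eval_flat(table, b),
    }
}

fn main() {
    // (1 + 2) + 3 in both representations
    let ptr = PtrExpr::Add(
        Box::new(PtrExpr::Add(
            Box::new(PtrExpr::Num(1)),
            Box::new(PtrExpr::Num(2)),
        )),
        Box::new(PtrExpr::Num(3)),
    );
    let table = vec![
        FlatExpr::Num(1),    // index 0
        FlatExpr::Num(2),    // index 1
        FlatExpr::Add(0, 1), // index 2
        FlatExpr::Num(3),    // index 3
        FlatExpr::Add(2, 3), // index 4: root
    ];
    assert_eq!(eval_ptr(&ptr), 6);
    assert_eq!(eval_flat(&table, 4), 6);
    println!("both evaluate to 6");
}
```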

@glaebhoerl @regehr @comex Also see Compilation On The GPU? A Feasibility Study[1], which is in some ways more conventional than co-dfns (it's a C-like language) and also in some ways more ambitious (the parsing handles arbitrarily nested depth without performance compromise).

[1]: dl.acm.org/doi/pdf/10.1145/352

@raph @glaebhoerl @regehr That ('Compilation on the GPU?') is awesome. Of course, such a simple compiler would also be extremely fast on the CPU, and the kinds of language rules that make modern language compilers slow would be… a lot harder to parallelize. But even doing that much on the GPU is really cool.
