@glaebhoerl I know this is one of the points he makes in passing in the article, but LLVM doesn't even model SIMT convergence correctly. So besides not having much shared GPU-specific open source code to build on, you're also starting from a semantic disadvantage rather than neutral ground.