i've been implementing my wiki system in haskell... types are /far/ too rigid in many circumstances.

to link code from two libraries together, i had to serialize html to a string and reparse it with the other library, all because their html asts - though structurally identical - used incompatible IO types.
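
a rough sketch of that workaround, with hypothetical stand-in types rather than the actual libraries involved:

    -- stand-ins for two structurally identical html asts that don't share a type
    data HtmlA = ElemA String [HtmlA] | TextA String
    data HtmlB = ElemB String [HtmlB] | TextB String

    -- render with the first library's type...
    renderA :: HtmlA -> String
    renderA (TextA s)    = s
    renderA (ElemA t cs) =
      "<" ++ t ++ ">" ++ concatMap renderA cs ++ "</" ++ t ++ ">"

    -- ...then hand the string to the other library's parser (elided),
    -- even though a direct conversion would be trivial if the types lined up:
    convert :: HtmlA -> HtmlB
    convert (TextA s)    = TextB s
    convert (ElemA t cs) = ElemB t (map convert cs)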

still going to stick with haskell, because it's the best way to work with pandoc... but it's difficult to trust for serious software development while several libraries work like this.

on the plus side, i can be relatively confident in the code if it typechecks... but this is no replacement for just running the code and correcting afterwards
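
a tiny illustration of that gap - this typechecks cleanly but still blows up at runtime (a made-up function, not from my wiki code):

    -- the type says nothing about the empty list
    firstHeading :: [String] -> String
    firstHeading = head   -- partial: crashes on []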

haskell's laziness means it can't reap the optimization benefits of strict evaluation either - such a silly language!

while it's fun to prove things in types, this is not the way to go to just get things done.

@jakeisnt huh! what about the laziness stops it from being optimised? i would have thought that would make it *more* optimisable since a compiler gets more options on when it could run code.

@zens @jakeisnt yeah you can't do superoptimization on a strict language because a strict language definitionally has extra guarantees about execution time and ordering. i think jake is probably just not using the right abstraction for the job, which is admittedly the cause of a lot of inefficiency when working with more expressive tools

@syntacticsugarglider @jakeisnt the impression i got, though it might be wrong, is the main reason haskell isn’t optimised very much is it just isn’t popular enough to merit the effort- not a technical limitation

@zens @jakeisnt i'm not an expert on haskell, but lazy languages have optimization potential that's surprisingly effective: evaluators based on interaction nets (Lamping's optimal reduction and related work), plus proposed rewrite rules that give the appearance of negative complexity for some functions when fusion occurs
see: medium.com/@maiavictor/solving
GHC is fairly conservative from a theoretical perspective, mostly because it doesn't really need to be faster and is quite complex
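
to make the fusion point concrete, here's a sketch using GHC's RULES pragma (base ships the analogous rule for Prelude's map; rules only fire with -O):

    -- a local map, so the rewrite rule below is ours to declare
    mymap :: (a -> b) -> [a] -> [b]
    mymap _ []     = []
    mymap f (x:xs) = f x : mymap f xs

    {-# RULES "mymap/mymap"
          forall f g xs. mymap f (mymap g xs) = mymap (f . g) xs #-}

    -- after the rule fires, this traverses the list once,
    -- never allocating the intermediate list
    pipeline :: [Int] -> [Int]
    pipeline xs = mymap (+ 1) (mymap (* 2) xs)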

@zens @jakeisnt it leaves a lot on the table in a way that isn't mandated by the language at all; it's just that GHC already has a lot of involved optimization systems. those could probably be replaced in many ways with supercompilation by evaluation plus superoptimization at the evaluation level. I'm saying that based only on a vague understanding of Haskell's semantics and the received knowledge that GHC is complicated, but I presume such a more elegant compiler would basically have to be a clean-sheet design

@zens @jakeisnt i've seen @alcinnz comment on GHC's design at some length, and they could probably weigh in on this. However, GHC is honestly good enough in my experience: there's some fiddling necessary at times to ensure optimizations don't get missed, since there isn't really an overarching, formalized way they're implemented with demonstrable reliability, but the generated code generally performs more than well enough for real-world applications.

@syntacticsugarglider @zens @jakeisnt Yeah, here's my page on the subject: adrian.geek.nz/haskell_docs/gh

My understanding is that the main bottleneck preventing GHC from doing more optimizations is that it only compiles & optimizes a single file at a time BEFORE linking. It's pretty easy for GHC to determine where laziness isn't desired, and there are ways to turn off laziness if you need to.
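
for example, the usual knobs, as a minimal sketch using only base:

    {-# LANGUAGE BangPatterns #-}
    import Data.List (foldl')

    -- lazy left fold: builds a chain of thunks before forcing them
    sumLazy :: [Int] -> Int
    sumLazy = foldl (+) 0

    -- strict left fold: forces the accumulator at every step
    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0

    -- bang patterns give the same control in hand-written loops
    count :: [a] -> Int
    count = go 0
      where
        go !n []     = n
        go !n (_:xs) = go (n + 1) xs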

@alcinnz @syntacticsugarglider cool! sorry if this feels like a pile-on, @jakeisnt - not my intent. when there's something that contradicts my view of the world i like to find out if i am wrong. if there's an argument that laziness makes optimisation harder i'd like to read it.

@zens @alcinnz @syntacticsugarglider not a problem, i'd rather take some criticism and learn something than take away nothing at all : )
