Yes if you remove all frivolity I’m sure the joke will be funnier
I definitely agree on the last point. Personally I like languages where I can get the compiler to check a lot more of my reasoning, but I still want to be able to use all the memory management techniques that people use in C.
I remember Jonathan Blow did a fairly rambling stream of consciousness talk on his criticisms of Rust, and it was largely written off as “old man yells at clouds”, but I tried to make sense of what he was saying and eventually realised he had a lot of good points.
I think it was this one: https://m.youtube.com/watch?v=4t1K66dMhWk
That’s what std::move does, and you’re right that it’s quite an ugly hack to deal with C++ legacy mistakes that C doesn’t have.
I say move semantics to refer to the broader concept, which exists to make manual memory management safer and easier to get right. It’s also a core feature of Rust.
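For anyone who hasn’t run into it, here’s a minimal C++ sketch of what I mean (the names are just for illustration): std::move is only a cast that makes the move constructor get chosen, so ownership of the resource is transferred rather than copied.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

int main() {
    // A uniquely-owned resource.
    auto buffer = std::make_unique<std::string>("lots of data");

    std::vector<std::unique_ptr<std::string>> owners;

    // std::move is just a cast to an rvalue reference; it selects the
    // move constructor, so ownership of the string is transferred
    // rather than copied.
    owners.push_back(std::move(buffer));

    // After the move, `buffer` is null. Being allowed to keep touching
    // the moved-from object is the "ugly hack" part of the C++ version.
    return buffer == nullptr ? 0 : 1;
}
```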
Also I’m talking about parametric polymorphism, not subtype polymorphism. So I mean things like lists, queues and maps which can be specialised for the element type. That’s what I can’t imagine living without.
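To make the parametric polymorphism part concrete, a tiny sketch using nothing but the standard containers: one generic implementation, specialised per element type by the compiler, which is exactly what you end up hand-rolling over and over in C.

```cpp
#include <map>
#include <queue>
#include <string>
#include <vector>

int main() {
    // The same container code works for any element type you ask for,
    // with no rewriting per payload type.
    std::vector<int> xs = {1, 2, 3};
    std::queue<std::string> q;
    std::map<std::string, std::vector<int>> index;

    q.push("hello");
    index["evens"] = {2, 4, 6};

    return 0;
}
```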
I would have said the same thing a few years ago, but after writing C++ professionally for a while I have to grudgingly admit that most of the new features are very useful for writing simpler code.
A few are still infuriating though, and I still consider the language an abomination. It has too many awful legacy problems that can never be fixed.
The only conceivable way to avoid pointers in C is by using indices into arrays, which have the exact same set of problems that pointers do because array indexing and pointer dereferencing are the same thing. If anything array indexing is slightly worse, because the index doesn’t carry a type.
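Here’s a small hypothetical C++ sketch of that point: mixing up pointers is caught at compile time because a pointer carries the element type, while a bad index compiles fine and silently reads the wrong table.

```cpp
#include <cstddef>
#include <vector>

struct User  { int id; };
struct Order { int total; };

int main() {
    std::vector<User>  users  = {{1}, {2}};
    std::vector<Order> orders = {{100}, {250}};

    // A pointer at least carries the element type; mixing these up is
    // a compile error.
    User* u = &users[0];
    // Order* o = &users[0];   // error: cannot convert User* to Order*

    // An index is just a number. Nothing stops you from using an index
    // that was computed for one array against a different one, or one
    // that has gone stale. Same bug class as a bad pointer, minus the
    // type information.
    std::size_t idx = 1;        // an index computed for `users`
    Order oops = orders[idx];   // compiles fine, silently reads the wrong table

    (void)u; (void)oops;
    return 0;
}
```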
Also you’re ignoring a whole host of other problems in C. Most notably unions.
People say that “you only need to learn pointers”, but that’s not a real thing you can do. It’s like saying it’s easy to write correct brainfuck because the language spec is so small. The exact opposite is true.
I’m not a fan of C++, but move semantics seem very clearly like a solution to a problem that C invented.
Though to be honest I could live with manual memory management. What I really don’t understand is how anyone can bear to use C after rewriting the same monomorphic collection type for the 20th time.
They are both doomed because neither is transformative enough to justify adoption. They are going to need to solve much harder problems to do that.
Take Rust as an example. It solved a problem that most people weren’t even paying attention to, because the accepted wisdom said it was impossible.
Mojo’s starting point is absurdly complex. Seems very obviously doomed to me.
Julia is a very clever design, but it still never felt that pleasant to use. I think it was held back by using LLVM as a JIT, and by the single-minded focus on data science. Programming languages need to be more opportunistic than that to succeed, imo.
The GitHub blurb says the language is comparable to general-purpose languages like Python and Haskell.
Perhaps unintentionally, this seems to imply that the language can speed up literally any algorithm linearly with core count, which is impossible.
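To spell out why it’s impossible: the usual Amdahl’s law bound says that if only a fraction p of the work can be parallelised, the speedup on N cores is at most

S(N) = 1 / ((1 − p) + p/N) ≤ 1 / (1 − p)

so unless p = 1, i.e. there are no sequential dependencies at all, the speedup flattens out no matter how many cores you throw at it.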
If it can automatically accelerate a program that has parallel data dependencies, that would also be a huge claim, but one that is at least theoretically possible.
I don’t want to be mean, but this is quite a lot of misinformation. For the benefit of other readers: this post makes a large number of specific claims very confidently, and almost all of them are wrong.
It takes time to make sense of this subject and it’s normal to get things wrong while learning. I just don’t want other people to go away thinking that closures have something to do with memory safety, etc.
Out of the ones you listed I’d suggest Julia or Clojure. They are simple and have interactive modes you can use to experiment easily.
Experienced programmers often undersell the value of interactive prompts because they don’t need them as much. They already have a detailed mental model of how most languages behave.
Another thing: although Julia and Clojure are simple, they are also quite obscure and have very experimental designs. Python might be a better choice. From a beginner’s perspective it’s very similar to Julia, but it’s vastly more popular and lots of people learn it as their first language.
Based on the languages you found, I’m guessing you were looking for something simple and elegant. I think Python fits this description too.
REBOL is one of my biggest blind spots in programming language familiarity. I remember there was another REBOL revival project called RED, which always boasted huge feature sets with small amounts of code, though I never got around to investigating those claims myself.
This project seems to aim to provide strong foundations for a more performant compiler, but still lacks the most powerful REBOL features. I wonder if anyone can summarise those features? In particular, is there anything fundamental that distinguishes REBOL from Lisp, Smalltalk, Ruby, etc?
If you want to do anything other than long-term blue sky VM research, don’t write your own VM. That’s my advice. Same goes for programming languages, game engines, etc.
Always do the unambitious thing that seems like it should take one weekend, and probably set aside a month for it >_>
Also admit to yourself that things like bloat and binary size are not real problems, unless they intrude on your daily workflow. They are just convenient distractions from harder tasks.
I say this as someone who is constantly failing to complete any projects for all these reasons.
But by that logic I don’t see why any Turing-complete language wouldn’t be in the “portable” group, including any hardware-specific assembly. You can always implement a translator that has well-defined behaviour, right?
What matters is the practical reality. Generally, languages are not portable when they don’t have well-defined behaviour, and when this causes their implementations to differ.
And thanks to this low standard for portability, a lot of VMs and high level languages are portable until you get to the FFI.
e.g. is 6502 assembly now portable, given that C64 and NES emulators are commonplace?
I would say yes! It’s just that portability is not the only thing required to make a VM spec useful.
But if you lacked other options, you could theoretically build gcc for the 6502 once, and then use the same binary to bootstrap gcc on lots of different platforms, specifically thanks to the proliferation of NES emulators.
This would also only work if there is a standard NES API available in all the emulators that is rich enough to back a portable libc implementation. I have no idea about that part.
“Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.”
This is front and centre on the homepage…
It makes assumptions about the native architecture for the sake of performance, but it’s still portable because you can implement all that behaviour in a VM if necessary. The important thing is that the behaviour is well defined.
It’s not perfect but I don’t think the situation is any worse than in Java, C#, Lua, etc. If your hardware has non-standard floats you’re going to have a bad time with any VM.
How did they ask all these random people and not bother to ask a single software engineer?
“Hi is this excuse real, or is it just a sign of an inappropriate relationship between the local council and a dodgy software company that pays more dividends than developers? Oh it’s the latter? Okay, thanks.”
My criticism is almost the opposite. I think LLVM is pretty easy and quite a good way of learning about low-level stuff in an incremental way. I just think it sucks to have it as a critical dependency in a mature product.
When I see a new language that is built around LLVM I know the build time is going to be terrible.
As an aside, if I were teaching a compilers class I’d look quite seriously at wasm as a target. It’s pretty much the only target that is open-ended, high performance, and portable.
I’ve never used it though, so I don’t know how easy it would be for beginners to make sense of.
“You can argue about productivity and ‘progress’ all you like, but none of that will raise you back into my good opinion.”
Why would you quote this and then immediately argue about productivity and progress?
In my opinion dependency injection solves a problem that doesn’t need to exist, and does it by adding even more obfuscation and complexity.
The problem is that the original gang of four design patterns had very little to say about managing effects. In old java code things like network and file IO often happen deep inside the object graph, hidden behind multiple impenetrable abstractions such that it’s impossible to run the logic without triggering the effect.
The wrong solution is to add even more obfuscation and abstraction, so that you can inject replacement classes deep inside the object graph where the effects happen. It solves the immediate problem of implementing tests, but makes everything else worse and more confusing.
The right solution is to surface all your effects at the top level of the call graph. The logic only generates data, and passes it back up to the top level of the program. The top level code then decides whether to feed this data into an effectful operation. Now all your code is easier to reason about, and you can easily test the logic without triggering unwanted effects.
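As a minimal sketch of that shape (all names here are hypothetical, it’s not anyone’s framework): the logic builds a description of what should happen, and only the top level decides whether to actually perform the effect.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Pure logic: decides what should be sent, performs no IO itself.
// Easy to test by just inspecting the returned data.
std::vector<std::string> buildNotifications(const std::vector<std::string>& overdueUsers) {
    std::vector<std::string> messages;
    for (const auto& user : overdueUsers) {
        messages.push_back("Reminder for " + user + ": your invoice is overdue");
    }
    return messages;
}

// Effectful edge: only the top level decides whether the data actually
// hits the network / filesystem / console.
void sendAll(const std::vector<std::string>& messages) {
    for (const auto& msg : messages) {
        std::cout << msg << "\n";   // stand-in for the real effect
    }
}

int main() {
    auto messages = buildNotifications({"alice", "bob"});
    sendAll(messages);   // a test would call buildNotifications and skip this
    return 0;
}
```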
Eh, I really did look for a joke. All I see is a “well actually” opinion that somebody here probably holds.