o11c

joined 2 years ago
[–] [email protected] 4 points 2 years ago (1 children)

I haven't managed to break into the JS-adjacent ecosystem, but tooling around TypeScript is definitely a major part of the problem:

  • following a basic tutorial, I somehow ended up spending multiple seconds just to transpile and run "Hello, World!".
  • there are at least 3 different ways of specifying the files and settings you want to use, and some of them will cause others to be ignored entirely, even though it looks like they should be used.
  • embracing duck typing means many common type errors simply cannot be caught. It also means dynamic type checks are impossible, even though JS itself supports them (admittedly with oddities, e.g. string vs String); see the sketch after this list.
  • there are at least 3 incompatible ways to define and use a "module", and it's not clear what's actually useful or intended to be used, or what the outputs are supposed to be for different environments.
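
To be concrete about that duck-typing point, here's a minimal sketch (all names are made up) of a unit mix-up that structural typing waves through, plus the string-vs-String oddity on the JS side:

```typescript
// Two semantically different units; structurally they're both just `number`,
// so TypeScript happily accepts one where the other is required.
type Meters = number;
type Feet = number;

function setAltitude(altitude: Meters): void {
  console.log(`altitude set to ${altitude} m`);
}

const requested: Feet = 100;
setAltitude(requested); // type-checks fine; the bug ships

// Types are erased at runtime, so there's no `instanceof Meters` to fall
// back on - and the check JS itself offers is quirky:
console.log(typeof "abc");             // "string"
console.log(typeof new String("abc")); // "object" - not "string"!
```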

At this point I'm seriously considering writing my own sane-language-to-JS transpiler or using an existing one (maybe Haxe? but I'm not sure its object model allows full performance tweaking), because I've written code in literally dozens of other languages without this kind of pain.

WASM has its own problems (we shouldn't be quick to call asm.js obsolete ... also, C's object model is not what people think it is) but that's another story.


At this point, I'd be happy with some basic code reuse. Have a "generalized fibonacci" module taking 3 inputs, and call it 3 ways: from a web browser on the client side, as a web-browser request to a server (which is running Node.js), or as a Node.js command-line program (see the sketch below). Transpiling one of the callers should not force the others to be transpiled, but if multiple callers need to be transpiled at once, it should not typecheck the library internals multiple times. I should also be able to choose whether to produce a "dynamic" library (which can be recompiled later without recompiling its dependents) or a "static" one (only output a single merged file), and whether to minify.
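
The library side of that wish is trivial; it's the build story around it that's the problem. A hedged sketch (file layout and names are hypothetical):

```typescript
// fib.ts - the shared "generalized fibonacci" module.
// Pure logic with no environment assumptions, so in principle the same
// source serves the browser caller, the server handler, and the CLI alike.
export function genFib(seedA: number, seedB: number, n: number): number {
  let [a, b] = [seedA, seedB];
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b];
  }
  return a;
}

// cli.ts - one of the three callers, as a Node.js command-line program:
//   import { genFib } from "./fib.js";
//   const [seedA, seedB, n] = process.argv.slice(2).map(Number);
//   console.log(genFib(seedA, seedB, n));
```

The code is the easy part; the open question is which combination of tsconfig "module"/"target" settings lets each caller build independently without re-typechecking fib.ts three times.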

I'm not sure the TS ecosystem is competent enough to deal with this.

[–] [email protected] 2 points 2 years ago

If there's a .pc file shipped, pkg-config can simplify your life by figuring out the flags for you.

[–] [email protected] 2 points 2 years ago

The problem is that the application developer usually thinks they know everything about what they want from their dependencies, but they actually don't.

[–] [email protected] 1 points 2 years ago (4 children)

The problem is that GLIBC is the only serious attempt at a libc on Linux. The only competitor that is even trying is MUSL, and until early $CURRENTYEAR it still had world-breaking, standards-violating bugs marked WONTFIX. While I can no longer name similar catastrophes, that history gives me little confidence.

There are some lovely technical things in MUSL, but a GLIBC alternative it really is not.

[–] [email protected] 1 points 2 years ago (2 children)

That's misleading though, since it only cares about one side, and ignores e.g. the much faster development speed that dynamic linking can provide.

[–] [email protected] 3 points 2 years ago

Only if the library is completely shitty and breaks between minor versions.

If the library is that bad, it's a strong sign you should avoid it entirely since it can't be relied on to do its job.

[–] [email protected] 6 points 2 years ago (7 children)

Some languages don't even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least, the whole binary - looking at you, Rust!) at once. Do note that "static linking" has shades of meaning: it applies to "link multiple objects into a binary", but often that is excluded from the discussion in favor of just "use a .a instead of a .so".

Dynamic linking supports a much faster development cycle than static linking (which is faster than whole-binary-at-once), at the cost of slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but we all know most developers don't care about security, so I'm talking about annoyance instead. Some realistic numbers: dynamic linking might be "rebuild in 0.3 seconds" vs static linking's "rebuild in 3 seconds" vs no linking's "rebuild in 30 seconds".

Dynamic linking is generally more reliable against long-term system changes. For example, it is no longer possible to run old statically-linked builds of bash 3.2 on a modern distro (something about an incompatible locale format?), whereas the dynamically-linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that "just run everything in a container" isn't a solution, because somebody has to maintain the distro inside the container.

Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need frobbing, there's nothing wrong with RPATH if you're not setuid or similar (and even if you are, absolute root-owned paths are safe - a reasonable restriction since setuid will require more than just extracting a tarball anyway).

Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you give up on the notion of a "single source of truth". If you actually read the man pages for the tools you're using, this is very easy to get right, but a lack of such basic ability is common among proponents of static linking.

Again, keep in mind that "just run everything in a container" isn't a solution because somebody has to maintain the distro inside the container.

The big question these days should not be "static or dynamic linking" but "dynamic linking with or without semantic interposition?" Apple's broken "two-level namespaces" feature is closely related but also prevents symbol migration, and is really aimed at people who forgot to use -fvisibility=hidden.

[–] [email protected] 10 points 2 years ago

As a practical matter it is likely to break somebody's unit tests.

If there's an alternative approach that you want people to use in their unit tests, go ahead and break it. If there isn't, but you're only doing such breakage rarely and it's reasonable for their unit tests to be updated in a way that works with both versions of your library, do it cautiously. Otherwise, only do it if you own the universe and you hate future debuggers.
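
For the "works with both versions" case, the usual trick is to ship the replacement alongside a deprecated wrapper for at least one release, so downstream tests can migrate at their own pace. A sketch with made-up names:

```typescript
// v2.x of a hypothetical library: frobnicate() is being replaced by frob().
export function frob(input: string): string {
  return input.trim();
}

/** @deprecated Use frob() instead; scheduled for removal in v3. */
export function frobnicate(input: string): string {
  return frob(input); // old name kept as a thin wrapper, so unit tests
                      // written against either API pass during the transition
}
```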

[–] [email protected] 4 points 2 years ago

The thing is - I have probably seen hundreds of projects that use tabs for indentation ... and I've never seen a single one without tab errors. And that's ignoring e.g. the fact that tabs break diffs, or who knows how many other things.

Using spaces doesn't automatically mean a lack of errors, but it's clearly easy enough that it's commonly achieved. The most common argument against spaces seems to boil down to "my editor inserts hard tabs and I don't know how to configure it".

[–] [email protected] 2 points 2 years ago

The problem is that what everybody really wants is parameterization, not concatenation. But most solutions for that are flaky, even where they exist.
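
To spell out the distinction (a sketch; translate() and its catalog are hypothetical):

```typescript
const count = 3;
const dir = "/tmp";

// Concatenation bakes English word order into the code; a translator only
// ever sees the disconnected fragments "Found " and " files in ".
const concatenated = "Found " + count + " files in " + dir;

// Parameterization keeps the sentence whole, so a locale's translation can
// reorder the placeholders however its grammar requires.
const parameterized = translate("Found {count} files in {dir}", { count, dir });

// Toy stand-in; a real system would look the template up in a per-locale
// catalog before substituting.
function translate(template: string, params: Record<string, string | number>): string {
  return template.replace(/\{(\w+)\}/g, (_, key) => String(params[key] ?? ""));
}

console.log(concatenated, "|", parameterized);
```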

[–] [email protected] 3 points 2 years ago

It's solving (and facing) some very interesting problems at a technical level ...

but I can't get over the dumb decision for how IO is done. It's $CURRENTYEAR; we have global constructors even if your platform really needs them (hint: it probably doesn't).

[–] [email protected] 16 points 2 years ago

Stop reinventing the wheel.

Major translation systems like gettext (especially the GNU variant) have decades of tooling built up for "merging" and all sorts of other operations.

Even if you don't want to use their binary format at runtime, their tooling is still worth it.
