Summary by Dan Luu on the question of whether objective advantages of statically typed languages (such as measurably fewer bugs, or measurably less time to solve problems) can actually be demonstrated.

If I think about this, the authors of statically typed languages might, at the beginning, not even have claimed such advantages. Originally, the objective advantage was that compilers for languages like C or Pascal could run at all on computers like the PDP-11, which initially had only 4 K of memory and a 16-bit address space, and even later, C programs were much faster than the Lisp programs of that time. Back then, it was also considered an attribute of the programming language itself whether code was compiled to machine instructions or interpreted.

Today, with JIT compilation as in Java, and with the best Common Lisp implementations such as SBCL being within a stone's throw of the performance of Java programs, this distinction is not as relevant any more.

Further, opinions might have been biased by comparing C to memory-safe languages; in other words, where actual productivity gains were perceived, their causes might have been confused.

What seems to be on more or less firm ground is that the fewer lines of code you need to write to cover a requirement, the fewer bugs the program will have. So more concise/expressive languages do have an advantage.

There are people who have looked at all the program samples in the above linked benchmark game and compared run-time performance and size of the source code. This leads to interesting and sometimes really unintuitive insights: there are in fact large differences in code size for the same task between programming languages, and a couple of languages such as Scala, JavaScript, Racket (PLT Scheme) and Lua come out quite well for the ratio of size to performance.

But given all this, how can one assess productivity, or the time to get from definition of a task to a working program, at all?

And the same kind of questions arises for testing. Most people would agree nowadays that automated tests are worth the effort, that they improve quality, shorten the time to get something working, and lead to fewer bugs. (A modern version of the Joel Test might include automated testing, but, spoiler: >!Joel's list does not contain it.!<)

Testing in small units also interacts positively with a "pure", side-effect-free, or 'functional' programming style... with the caveat, perhaps, that this style tends to push a program's complex I/O functions to its periphery.
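To make that testability point concrete, here is a minimal Go sketch (all names are hypothetical, not from the post): the business rule is a pure function, so a unit test can call it directly with no database, network, or file setup.

```go
// pricing.go — hypothetical example: the business rule is a pure function.
package pricing

// Discount is pure: its result depends only on its arguments.
// Prices are in cents; members get 10% off orders over 100.00.
func Discount(cents int64, isMember bool) int64 {
	if isMember && cents > 10_000 {
		return cents * 9 / 10
	}
	return cents
}
```

```go
// pricing_test.go — the test needs no fixtures or mocks, just assertions.
package pricing

import "testing"

func TestDiscount(t *testing.T) {
	if got := Discount(20_000, true); got != 18_000 {
		t.Errorf("Discount(20000, true) = %d, want 18000", got)
	}
	if got := Discount(20_000, false); got != 20_000 {
		t.Errorf("Discount(20000, false) = %d, want 20000", got)
	}
}
```

Reading the order and printing the receipt would live in main (or an HTTP handler), which is exactly the "I/O pushed to the periphery" caveat above.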

It feels more solid to have a complex program covered by tests, yes, but how can this be confirmed in an objective way? And if it can, for which kind of software is this valid? Are the same methodologies adequate for web programming as for industrial embedded devices or a text editor?

[–] [email protected] 0 points 4 days ago* (last edited 4 days ago) (7 children)

> You can scientifically prove that one code snippet has fewer bugs than another though, and there's already mountains of evidence of static typing making code significantly less buggy on average.

Do you mean memory safety here? Because yes, for memory safety, this is proven. E.g. there are reports from Google that wide usage of memory-safe languages for new code reduces the number of bugs.

> You can't scientifically prove that something is "better" than another thing because that's not a measurable metric.

Then, first: why don't the claims about statically typed languages come with measurable, objective benefits? If they are really significantly better, it should be easy to come up with such measures.

And the second thing: we have at least one large-scale experiment, because Google introduced Go and used it widely within the company to replace Python.

Now, it is clear that programs in Go run with higher performance than Python, no question.

But did this lead to productivity increases or better code because Go is a strongly, statically typed language? I have seen no such report, in spite of the fact that they now have 16 years of experience with it.

(And just for fun, Python itself is memory safe, and concurrency bugs in Python code can't lead to undefined behaviour as they can in C. Go is neither memory safe in that sense, nor does it have that level of concurrency safety: if you concurrently modify a hash table from two different threads, this will cause a crash.)
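A minimal sketch of that last claim (assuming Go 1.6 or later, where the runtime has a best-effort check for concurrent map writes): two goroutines writing to the same map without synchronization usually abort the whole process with "fatal error: concurrent map writes", which cannot be recovered from.

```go
// Unsynchronized map writes from two goroutines: the Go runtime's
// concurrent-map-write check usually terminates the program outright.
package main

func main() {
	m := map[int]int{}
	done := make(chan struct{}, 2)

	for w := 0; w < 2; w++ {
		go func(w int) {
			for i := 0; i < 100_000; i++ {
				m[i] = w // unsynchronized write; racing with the other goroutine
			}
			done <- struct{}{}
		}(w)
	}

	<-done
	<-done
}
```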

[–] [email protected] 3 points 4 days ago (3 children)

How is Go not memory safe? Having escape hatches does not count, all the safe languages have those.

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago) (2 children)

As I already said: if you access and write to the same hash map from two different threads, this can cause a crash. And in general, if you access objects concurrently in Go, you need proper locking (or communication via channels), otherwise you will get race conditions, which can result in crashes. Ctrl-F "concurrency" here. This is different from, for example, Java or Python, where fundamental objects always stay in a consistent state, guaranteed by the runtime.
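For completeness, a sketch of the "proper locking" mentioned above, using one idiomatic option, a sync.Mutex guarding the shared map (sync.Map or a channel owning the map would be alternatives):

```go
// Guarding a shared map with a mutex: the same two-writer workload as
// before, but every access is serialized, so there is no crash.
package main

import (
	"fmt"
	"sync"
)

type counter struct {
	mu sync.Mutex
	m  map[string]int
}

func (c *counter) inc(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key]++
}

func main() {
	c := &counter{m: make(map[string]int)}

	var wg sync.WaitGroup
	for w := 0; w < 2; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100_000; i++ {
				c.inc("hits") // every access goes through the mutex
			}
		}()
	}
	wg.Wait()

	fmt.Println(c.m["hits"]) // 200000, and no fatal error
}
```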

[–] [email protected] 3 points 4 days ago (1 children)

That is a consequence of having parallelism: all mainstream pre-Rust memory-safe languages with parallelism suffer from this issue, yet they are still generally regarded as memory safe. I don't know where you got the idea that Java does not have this issue; you need to know to use the parallelism-safe data types where necessary.

[–] [email protected] 2 points 4 days ago* (last edited 3 days ago)

In Java, this wouldn't cause a crash or incorrect behaviour of the runtime; Java guarantees that. One still needs locking to keep grouped changes in sync and the ordering of multiple operations consistent, but not in the way required in Go, C, or C++.

Also, it is certainly possible to implement the "shared access XOR mutating access" principle that Rust enforces in a dynamically typed language. Most likely this won't come with the performance guarantees of Rust, but, hey, Python is 50 times slower than C and it's widely used.
