For some years an idea had been nagging me. As a programmer, I could see the issues that were already beginning to plague the software industry.
- Software was becoming more and more complex, requiring the cooperation of many companies and individuals.
- Internationalisation: as technology spread into less developed countries, the need for multiple language versions went with it.
- CPUs, though still keeping pace with Moore’s law, were achieving this only by adding multiple cores.
- Hardware architectures and code portability.
- The declining ability to customise entire systems.
- Software reliability and testing.
Let us take these points individually and examine them further.
1. Software Complexity
As computers do more, we want them to do even more, but the amount of code that any individual or company can write is limited. Code from many different sources is required to build such complex programs. This requires a level of cooperation, something that business in general has a problem with, let alone software houses.
Every software house wants to define its own terms and standards and push its presence on the user. Programmers are a bit wild: they will change their minds on the slightest whim and disappear over the horizon on their white chargers before fully assessing the consequences.
We are now seeing those consequences. The update race is hotting up. Company A writes a program that depends on company B’s library, which in turn depends on company C’s library, while company D uses company A’s program as part of a complex specialist suite. Company A finds a problem with library B. Company B defers the problem to company C. Company C has not only fixed the bug, but altered its library in such a manner that company B must re-write some of its code… The cascade of updates rattles down the development chain upsetting everything in its wake. The internet has made this worse. The fact that software houses can ship untested, shoddy code, knowing that they can simply supply an online update, almost encourages poor quality software.
Continuing with this scenario, company E also writes a program that depends on company B’s library, but company E no longer supports its program. Now upgrading library B breaks program E.
It has all become very messy: an escalating arms race that only seems to be getting worse. I wonder how much productivity is lost to updates.
2. Internationalisation
When a software house moves into the international arena, embedded human language becomes a big problem. Code is often written with human-language strings embedded directly in it, leaving two options: create multiple copies of the software, or re-write the code to use internationalisation libraries. The first is a quick, short-term solution that means modifying, recompiling and redistributing the program for every language. The second is more long-term, but requires substantial investment. Additionally, every new language requires access to translators, and if a translator for a particular language is not economically viable, the translation simply doesn’t happen.
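The second approach can be sketched very simply: strings are looked up in a message catalogue at run time, so adding a language means adding a catalogue rather than recompiling the program. The names here (`CATALOGUES`, `translate`) are illustrative only; production code would typically use a library such as GNU gettext, which implements the same pattern.

```python
# A minimal sketch of run-time message lookup. CATALOGUES and
# translate() are hypothetical names, not from any real library.
CATALOGUES = {
    "en": {"greeting": "Hello", "farewell": "Goodbye"},
    "fr": {"greeting": "Bonjour", "farewell": "Au revoir"},
}

def translate(key, lang="en"):
    # Fall back to English when a translation is missing, so an
    # incomplete catalogue degrades gracefully instead of crashing.
    return CATALOGUES.get(lang, {}).get(key) or CATALOGUES["en"][key]

print(translate("greeting", "fr"))  # Bonjour
```

Adding, say, German support would then be a matter of shipping one more catalogue, with no change to the program’s code.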
3. Multi-Core CPUs
CPUs are becoming incredibly powerful. At the time of writing, AMD’s Ryzen Threadripper has 64 cores capable of running 128 simultaneous threads. Yet traditional programming languages are not really designed for writing multi-threaded software. It is part of the nature of language: language is a stream, and computer languages are no different. Writing multi-threaded code is tricky. Threads must be coordinated from one central program thread, must be well behaved, and must be able to operate independently of, or cooperatively with, other threads. Most software houses will avoid multi-threading unless their programs are sufficiently CPU-intensive (i.e. slow) for their customers to demand it.
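The coordination burden described above can be sketched as follows: one central thread farms work out to a pool of workers and then gathers the results. This is only an illustration of the control pattern, not of real speed-up; in CPython in particular, the global interpreter lock prevents threads from running Python bytecode in parallel on multiple cores.

```python
# One central thread dispatches work to a pool and collects results.
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Stand-in for a CPU-intensive task operating on one slice of data.
    return sum(chunk)

# Split the data 0..999 into ten chunks of one hundred numbers each.
data = [list(range(i, i + 100)) for i in range(0, 1000, 100)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # The main thread blocks here, coordinating and gathering results
    # in order -- exactly the central-control role described above.
    results = list(pool.map(work, data))

total = sum(results)  # 0 + 1 + ... + 999 = 499500
```

Even in this tiny example, the central thread must decide how to partition the data, how many workers to run, and how to recombine the results; real programs add error handling, cancellation and shared-state locking on top.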
4. Hardware Architecture
CPUs differ. Even CPUs from the same manufacturer have different capabilities, and therefore require different code. It is not possible to execute ARM code on an Intel CPU, nor to execute Intel Core 2 specific code on an Intel 80486. ARM and Intel architectures are so different that there is no solution but to compile a separate version of the software for each. Within the Intel family, however, a different problem arises. To support several generations of CPU, the software house must make a choice: either compile for the lowest common denominator (i.e. the 486), or have the program detect CPU enhancements at run time and select different subroutines accordingly. The first option simply avoids using any CPU enhancements, at the cost of performance. The second takes a small performance hit performing the required tests, but allows CPU enhancements to be utilised when present. It does, however, mean that two versions of each affected function must exist in the program, increasing its size with RAM and code that will never be used on any given machine.
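The second option can be sketched as follows: detect the capability once at start-up, then bind the matching implementation so each call pays only one indirect jump. Python here stands in for what would in practice be C; `has_fast_path()` is a hypothetical placeholder for a real CPUID query, and both function names are invented for illustration.

```python
# Runtime-dispatch sketch: both implementations ship in the program,
# but only one is selected when the program starts.

def sum_squares_generic(xs):
    # The lowest-common-denominator version, runnable anywhere.
    return sum(x * x for x in xs)

def sum_squares_optimised(xs):
    # In C this would be the SSE/AVX routine; here it merely stands in
    # for "the enhanced version of the same function".
    return sum(map(lambda x: x * x, xs))

def has_fast_path():
    # Hypothetical: a real program would query the CPU (e.g. CPUID)
    # here instead of returning a constant.
    return True

# One-off detection at start-up binds the function used thereafter.
sum_squares = sum_squares_optimised if has_fast_path() else sum_squares_generic

result = sum_squares([1, 2, 3])  # 1 + 4 + 9 = 14
```

Note that both versions remain in the binary regardless of which is chosen, which is exactly the size cost described above.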
Alternatively, the software house could produce multiple versions of its program or library, one for every CPU extension: FPU, MMX, SSE, SSE2, SSE3, SSE4, AVX… This would produce the most efficient code, but only the ultimate Linux geeks will compile code to so perfectly match their architecture. No commercial software house would be prepared to do such a thing unless its programs were aimed at very specific hardware.
5. System Customisation
Setting up a computer for a specific task is becoming more difficult. Have you ever been to an airport or railway station and seen an information screen with a Windows dialogue box waiting for a nonexistent user to click [OK]? A lot of electronic cash registers still run Windows XP, or even Windows 95, simply because it has become almost impossible to make the cash-register software just run on boot without Windows wanting to do something else instead. It has become nearly impossible to dedicate a computer to a single task, or to intercept errors and deal with them automatically.
Why can’t we make computers do what we want? Why can’t we create complex systems that allow us to integrate code from multiple software houses? Systems that automatically integrate the accounts software, word processing software, communications software, with custom user interfaces and automatic task assignment? After all, it’s your computer.
6. Software Quality
In the early days of software development, software was published on physical media: tapes, floppies, CDs and DVDs. The software had to work. Huge effort was put into quality control and beta-testing; the failure to spot a bug could sound the death knell of a software house. Issuing bug-fixes and updates was an expensive process, so it was imperative that the software be right when shipped.
Now, with the internet, updates and bug-fixes can happen almost instantly and, more importantly for the developer, free of charge. Developers can now dispense with quality control and push the beta-testing directly onto paying customers, knowing full well that should there be further bugs, sending out a fix will cost them nothing. Have you ever considered why, with Windows 10, it became impossible to turn off updates? Only corporate users with the Enterprise edition have full update control. Why? Because Home and Pro users are the beta-testers. Only the corporate editions get the luxury of maintaining stable systems, whilst home users have to suffer the failures, sometimes to the point where their computers fail to boot.
Software quality is becoming an issue. With the arms race that exists between software developers and the often sloppy code that is published, it is the users who ultimately suffer. Yet how many users ever use the new features that have been added? How many even notice? And then there are security fixes. One must have security updates. But why? If the software were written more diligently, with proper beta-testing, there would be far fewer security issues in the first place.
These problems are the consequence of historical artefacts from the primitive beginnings of computer architecture. Maybe it’s time to re-evaluate the way software is built?