Security and the much-needed unification of servers
Today the news sites ran their monthly "Microsoft executive says Linux is insecure" article. And while that comparison is apples to oranges (Linux distributions ship with far more servers and network services than Microsoft offers), it is hard to deny that Linux is insecure too. Essential, security-critical packages like OpenSSH, LSH and OpenSSL all had exploits in the last few weeks, and that should have convinced even the last conservatives that it is not possible to write a complex server in C without producing a remote exploit per year. All of these exploits were caused by errors in manual memory management, which is very hard to avoid in C. But that is not even the point I want to make; you can have security problems in other languages too. What free software (and, for the most part, the proprietary competition) lacks is a way to make securing your computer easy.
Let's assume a somewhat experienced user wants to find out which TCP/UDP ports are open, reconfigure every server to accept only local IP addresses, or else shut it down. In an ideal world the user could open some administration GUI and get a list of all open ports, with a user-friendly service name (not the path of the binary!) for each port. The user would then right-click each entry and select "Configure this service"; a configuration GUI for that service appears and the user makes the necessary changes.
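To make the "accept only local IP addresses" part concrete: at the socket level the whole difference is the address a server binds to. Here is a minimal Python sketch; the port number is an arbitrary example, not tied to any real service, and binding to the loopback address is the strictest reading of "local only" (restricting to a LAN range would instead need a packet filter or per-server access rules):

    # Minimal sketch: the bind address alone decides whether a service is
    # reachable from other machines. Port 8080 is just an arbitrary example.
    import socket

    def open_listener(bind_address: str, port: int = 8080) -> socket.socket:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind((bind_address, port))  # the only line that matters here
        sock.listen()
        return sock

    # Loopback only: other hosts cannot connect at all.
    local_only = open_listener("127.0.0.1")

    # All interfaces: anyone who can route packets to this machine can connect.
    # world_open = open_listener("0.0.0.0", 8081)

The point of the ideal-world GUI is that the user never has to know which of these two variants a given server effectively runs with.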
The reality on a Linux system is different: first you run "netstat -lp" to get the binary paths and PIDs of your daemons. Even finding the command that lists open ports is hard unless you own a book on Linux administration. Maybe there is a GUI for this and I just don't know it, but that is rather unlikely, because there is another problem common to all Unix systems: complex system-management functions are only available from the command line. There is no portable way to get the list of open ports and the processes that use them, just as there is no good way to copy files without losing extended attributes or to mount a device. As a workaround some applications call the command-line tools, which results in bad progress information and horrible error messages (as you can see when you try to mount something in KDE and it goes wrong).

Anyway, once you have the netstat list the real fun begins: finding out how to configure each service. Unfortunately Unix developers have shown a lot of creativity in inventing new configuration systems for their servers. Some, like CUPS, have a web administration interface that is fairly easy to use, provided you can remember the port number. Others may even have a GUI administration tool; you only need to remember the name of the command that starts it. But most of them did the worst thing imaginable: they invented their own free-form configuration file format. Each one is completely different; the configuration files of Apache, Samba and OpenLDAP have nothing in common. And most of them are so complex (or powerful, if you prefer) that it is extremely hard to write a graphical administration tool that can read a hand-written configuration file.
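As an aside on the "no portable way" complaint: here is a rough Python sketch of a small part of what netstat -lp does on Linux. It is a sketch only (TCP over IPv4, and you need root to see other users' processes), and it reads the Linux-specific /proc filesystem, which is exactly why none of it carries over to other Unix systems:

    # Sketch of a fraction of "netstat -lp": list listening TCP ports and
    # the processes that own them, by reading the Linux-specific /proc
    # filesystem (IPv4 only; run as root to see all processes).
    import glob
    import os

    def listening_tcp_ports():
        """Yield (port, socket inode) for sockets in LISTEN state."""
        with open("/proc/net/tcp") as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                local_address, state, inode = fields[1], fields[3], fields[9]
                if state == "0A":  # 0A means LISTEN
                    port = int(local_address.split(":")[1], 16)
                    yield port, inode

    def socket_owners():
        """Map socket inode -> (pid, process name) by scanning /proc/*/fd."""
        owners = {}
        for fd_link in glob.glob("/proc/[0-9]*/fd/*"):
            try:
                target = os.readlink(fd_link)
                if not target.startswith("socket:["):
                    continue
                inode = target[len("socket:["):-1]
                pid = fd_link.split("/")[2]
                with open(f"/proc/{pid}/comm") as comm:
                    owners[inode] = (pid, comm.read().strip())
            except OSError:
                continue  # process exited or permission denied
        return owners

    if __name__ == "__main__":
        owners = socket_owners()
        for port, inode in listening_tcp_ports():
            pid, name = owners.get(inode, ("?", "unknown"))
            print(f"tcp port {port:5d}   pid {pid:>6}   {name}")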
Right now the situation for servers on Linux/Unix is like that of X11 applications ten years ago. Almost every popular server is written with its own framework, uses its own configuration file format and has its own way of being configured. Reconfiguring a server while it is running is often not possible (and how could it be, when the server is configured through a monolithic text file?). There is no common way to get live statistics out of a running server. Many servers even come with their own user management and their own permission system. The only things that are standardized are starting and stopping them (/etc/init.d) and the syslog.
Servers need what free desktop applications needed before KDE came along: a common framework. A common configuration format, a common way of being configured through GUI tools, a unified view for monitoring them... There are also options that many servers have in common, like the IP address to listen on or the allowed IP ranges. Users would then have to configure these only once and perhaps select which servers a setting applies to. You could also have several 'levels' of network security. At home, for example, you may want your personal webserver running for family members, so you set the level to 'medium'. But when you take your notebook to a cafe with a WLAN hotspot, you switch the network security level to 'high' so that other people on the same WLAN cannot access your server. I think there are many more possibilities for making the system easier to configure (and servers easier to write) by letting them use a common framework, just as KDE brought many advantages to X11 applications.
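To make the idea a bit more tangible, here is a purely hypothetical Python sketch; none of these classes or level names exist in KDE or anywhere else, and the LAN range is just an example. It only illustrates the core point: the options that servers have in common live in one place, and switching the global security level reconfigures every registered server at once:

    # Hypothetical sketch of a shared server-configuration framework.
    # All names (NetworkPolicy, Service, the level names) are invented
    # for illustration; the LAN range is an assumption as well.
    from dataclasses import dataclass
    from ipaddress import ip_network

    @dataclass
    class NetworkPolicy:
        """Options that almost every network service has in common."""
        listen_address: str
        allowed_ranges: tuple

    LEVELS = {
        # at home: family members on the LAN may connect
        "medium": NetworkPolicy("0.0.0.0", (ip_network("192.168.0.0/24"),
                                            ip_network("127.0.0.0/8"))),
        # on an untrusted WLAN hotspot: local connections only
        "high": NetworkPolicy("127.0.0.1", (ip_network("127.0.0.0/8"),)),
    }

    class Service:
        """A server that registered with the framework and gets
        reconfigured whenever the global security level changes."""
        def __init__(self, name: str):
            self.name = name

        def apply(self, policy: NetworkPolicy):
            ranges = ", ".join(str(r) for r in policy.allowed_ranges)
            print(f"{self.name}: listen on {policy.listen_address}, allow {ranges}")

    def set_security_level(level: str, services):
        """One setting, applied to every registered server at once."""
        policy = LEVELS[level]
        for service in services:
            service.apply(policy)

    if __name__ == "__main__":
        services = [Service("personal webserver"), Service("music share")]
        set_security_level("medium", services)  # at home
        set_security_level("high", services)    # at the WLAN hotspot

In a real framework the apply() step would of course rewrite the server's configuration and tell it to reload, rather than just printing a line.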
As I have argued before, I think desktop applications and servers should share a framework; it doesn't make much sense to maintain two separate frameworks, and a server needs a GUI anyway. And this also explains what's currently going on in kdenonbeta/kdeutil and kdenonbeta/knot.