Software

The program in a Unix-like system that allocates machine resources and talks to the hardware is called the “kernel”. Unix's origins date back to the mid-1960s, when the Massachusetts Institute of Technology, AT&T Bell Labs, and General Electric were jointly developing an experimental time-sharing operating system called Multics.

Bell Labs slowly pulled out of the project to start its own system, without the size and complexity of Multics. The researchers who left Multics, among them Ken Thompson, Dennis Ritchie, Doug McIlroy, and Joe Ossanna, set out to redo the work on a much smaller scale. In 1979, Dennis Ritchie described their vision for Unix:

"What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication."

C was developed by Dennis Ritchie between 1972 and 1973. In 1973, Version 4 Unix was rewritten in the higher-level language C. The migration to C made the software portable: only a relatively small amount of machine-dependent code has to be replaced when porting Unix to other computing platforms. C became one of the most widely used programming languages, with compilers for the majority of existing computer architectures and operating systems.

Bell Labs nearly destroyed the long-term viability of Unix with its "proprietary" policies and business models. With AT&T's launch of UNIX System V in 1983, free exchange of source code became difficult, if not impossible.

GNU

Development of the GNU operating system was initiated by Richard Stallman while he worked at the MIT Artificial Intelligence Laboratory. The effort was called the GNU Project, and Stallman publicly announced it on September 27, 1983, on the net.unix-wizards and net.usoft newsgroups. The GNU Project's most important contribution is its licenses, which prevent the kind of abuse that Bell Labs inflicted on the Unix community.

BSD

Since the newer commercial UNIX licensing terms were not as favorable for academic use as the older versions of Unix, the Berkeley researchers continued to develop BSD as an alternative to UNIX System III and V. Many contributions to Unix first appeared in BSD releases, notably the C shell with job control (modelled on ITS). Perhaps the most important aspect of the BSD development effort was the addition of TCP/IP network code to the mainstream Unix kernel.

Minix

In 1987, MINIX, a Unix-like system intended for academic use, was released by Andrew S. Tanenbaum to exemplify the principles conveyed in his textbook, Operating Systems: Design and Implementation. While source code for the system was available, modification and redistribution were restricted. In addition, MINIX's 16-bit design was not well adapted to the 32-bit features of the increasingly cheap and popular Intel 386 architecture for personal computers.

Linux

In the early nineties, a commercial UNIX operating system for Intel 386 PCs was too expensive for private users. The lack of a widely adopted free kernel prompted Linus Torvalds to start Linux. He has stated that if either the GNU Hurd or the 386BSD kernel had been available at the time, he likely would not have written his own.


Why "free software"?

"Free software is software that respects your freedom and the social solidarity of your community." - Richard Stallman

Free software and hardware are important for individuals and organizations in more ways than just technical and licensing details. Free software is a matter of liberty, not price: think of “free” as in “free speech”, not as in “free beer”. Free software is a question of the users' freedom to run, copy, distribute, study, change, and improve the software. Some points comparing free software with proprietary and open-source software:

  • Proprietary software means you are not free to use it as you wish. Free software lets you use it as you wish: study the program by accessing its sources, write additional code, test it, modify it, and distribute it. These things are prohibited for proprietary software and for some open-source solutions.
  • Proprietary software will never be yours, no matter how much you pay. Free software uses copyright law to grant you freedoms that are usually reserved for the copyright owner. Proprietary software locks users into proprietary standards to ensure they become returning customers; free software works with open standards.
  • Proprietary software means, fundamentally, that you don't control what it does. The corporation or developer makes your computer and infrastructure obey them instead of you. With free software you own your data and infrastructure, and you avoid becoming a product that can be sold and snooped on.

For more information, read "Free Software Is Even More Important Now" by Richard Stallman on gnu.org.

LibrePlanet | HackerSpaces | FSF | EFF


Operating Systems

Tanenbaum began a debate in 1992 on the Usenet discussion group comp.os.minix, arguing that microkernels are superior to monolithic kernels and that Linux was therefore, even in 1992, obsolete. A monolithic kernel is an operating system architecture in which all system functions, such as device drivers, protocol stacks, and file systems, run in kernel space.

A microkernel architecture moves those system functions out of the kernel and runs them as servers in user space. If the hardware provides multiple rings or CPU modes, the microkernel may be the only software executing at the most privileged level, which is generally referred to as supervisor or kernel mode.
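
As a rough illustration of the difference (not real kernel code), the C sketch below contrasts the two approaches: a “file system” that is a plain function call inside the caller's address space, versus the same function running in a separate, isolated process that answers requests over a pipe, in the spirit of a microkernel passing messages to a user-space server. The function fs_read_block() and the pipe protocol are invented for this example.

  /* Conceptual sketch only: contrasts a monolithic-style direct function
     call with a microkernel-style request to an isolated server process. */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/wait.h>

  /* Monolithic style: the "file system" lives in the caller's own
     address space; using it is just a function call. */
  static int fs_read_block(int block) { return block * 2; /* fake data */ }

  int main(void) {
      printf("monolithic result: %d\n", fs_read_block(21));

      /* Microkernel style: the "file system server" runs as a separate,
         protected process; the client sends a request and receives a
         reply over pipes, standing in for kernel message passing. */
      int req[2], rep[2];
      if (pipe(req) == -1 || pipe(rep) == -1) { perror("pipe"); return 1; }

      if (fork() == 0) {                        /* the server process */
          int block, result;
          read(req[0], &block, sizeof block);
          result = fs_read_block(block);        /* work done in isolation */
          write(rep[1], &result, sizeof result);
          _exit(0);
      }

      int block = 21, result;
      write(req[1], &block, sizeof block);      /* "send" the request */
      read(rep[0], &result, sizeof result);     /* "receive" the reply */
      printf("microkernel-style result: %d\n", result);

      wait(NULL);
      return 0;
  }

The extra context switches and copies on the message path are the usual performance argument against microkernels; the payoff is that a crashing server can be restarted without taking the whole system down.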

Apple's macOS uses a hybrid kernel called XNU, combining the Mach kernel developed at Carnegie Mellon University with components from FreeBSD and a C++ API for writing drivers called IOKit. The BSD subsystem, which in a pure microkernel design would typically be implemented as user-space servers, runs in kernel space.

The most widely supported kernels are monolithic, a legacy of early computer platforms, and they are also more performant than microkernels. Much hardware is still supported only by monolithic kernels.

Monolithic

OpenBSD
An elegant Unix kernel and userland tools developed together, with the project's emphasis on code quality.
CRUX
A GNU/Linux operating system with a simple yet powerful ports and package system.

Microkernel

Minix
Based on a tiny microkernel running in kernel mode, with the rest of the operating system running as a number of isolated, protected processes in user mode. It runs on x86 and ARM CPUs, is compatible with NetBSD, and runs thousands of NetBSD packages.
The Hurd
GNU's own kernel, started in 1990 and still under active development. The Hurd is a collection of servers running on top of the GNU Mach microkernel.

Desktop

Desktop-oriented software such as office suites, browsers, multimedia applications, and games has been ported. Some of this software is shared with other distributions and operating systems, which makes for a smooth operating system migration.

Tiling window managers offer better productivity and take more advantage of the screen area. They do require getting familiar with keystrokes, so users who want the easiest transition should try the KDE or MATE desktop environments.

Someone switching to this desktop will not miss other well-known systems; one reason is that the system can be highly personalized by the user. No other system on the market allows this level of customization, due to their business models or software licenses.

To know more, check the Desktop page.



Network

Intranets represent a shift from a hierarchical command-and-control organization to an information-based organization. This shift has implications for managerial responsibilities, communication and information flows, and workgroup structures.

An intranet aids in the management of internal information that may be interconnected with external services (transactions or operations conducted outside the intranet). The intranet allows internal information to flow instantaneously: vital information is processed and matched with data flowing in from external services, allowing for the efficient and effective integration of organizational processes.

A server system is configured to provide services and/or to host virtual machines. To host virtual machines, QEMU can be configured with a bridged network to allow easy integration with the existing network. These services can face the Internet or just the intranet.
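
As a minimal sketch of what this can look like, the small C launcher below starts a QEMU/KVM guest attached to an existing host bridge. It assumes the bridge is already configured on the host and named br0, that the QEMU bridge helper is allowed to use it (usually listed in /etc/qemu/bridge.conf), and that guest.img is a placeholder disk image; the equivalent command line can just as well be typed into a shell.

  /* Hypothetical QEMU launcher: boots a guest attached to bridge br0. */
  #include <stdio.h>
  #include <unistd.h>

  int main(void) {
      char *args[] = {
          "qemu-system-x86_64",
          "-enable-kvm",                        /* hardware virtualization, if available */
          "-m", "2048",                         /* 2 GiB of guest memory */
          "-drive", "file=guest.img,format=qcow2",
          "-netdev", "bridge,id=net0,br=br0",   /* attach the guest NIC to bridge br0 */
          "-device", "virtio-net-pci,netdev=net0",
          NULL
      };
      execvp(args[0], args);                    /* replaces this process; returns only on failure */
      perror("execvp qemu-system-x86_64");
      return 1;
  }

Because the guest sits directly on the bridge, it gets an address on the same network segment as the host and is reachable from the rest of the intranet like any other machine.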

To know more, check the Network page.



Server

To know more, check the Server page.