Article Publishing
~ multiple executable gui app ipc notes
This article will be turned into a PDF later. For now just notes:

A GUI app made of multiple executables (exe/elf/a.out) is possible: the GUI spawns off one or more extra executables and then communicates with them over IPC, and they send data back to the GUI.

Most GUI apps do not need a complex architecture in order for GUI widgets to function, despite GUIs being complex underneath. A button click event is extremely simple, and can easily be linked to a procedure in another executable rather than in the same process. A message is sent to the second process, which runs completely on its own but has an IPC mechanism (e.g. SimpleIPC) to receive a message from the GUI telling it to launch certain code. That code runs in its own process, so the GUI remains responsive and does not lock up. When the processing (whatever code is being run) is finished in the other executable, it sends a message back to the GUI app. For example, the GUI may have a memo or textarea field that needs status updates.
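
Rough sketch of the GUI side, using FPC's simpleipc unit (the SimpleIPC mentioned above). The server ID 'worker1' and the command text are made up for illustration; the worker is assumed to already be running a matching server:

  // Fragment of a Lazarus form unit (add "simpleipc" to the uses clause).
  // 'worker1' is a hypothetical server ID; the worker executable must have
  // created a TSimpleIPCServer with the same ID before this button is clicked.
  procedure TMainForm.StartButtonClick(Sender: TObject);
  var
    Client: TSimpleIPCClient;
  begin
    Client := TSimpleIPCClient.Create(nil);
    try
      Client.ServerID := 'worker1';
      if Client.ServerRunning then
      begin
        Client.Connect;
        // The message text is an application-defined command (an assumption
        // here), not something the simpleipc unit itself prescribes.
        Client.SendStringMessage('start-processing');
        Client.Disconnect;
      end;
    finally
      Client.Free;
    end;
    // The handler returns immediately; the heavy work runs in the worker
    // process, so the GUI never locks up.
  end;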

Key issue: the GUI needs to be synchronized so that the GUI exe/elf/a.out and the processing executable do not write to the memo at the same time. For this, the worker executable can do most of its non-GUI work on its own, which often needs to be done anyway, and only send GUI modification messages when necessary, and the GUI executable can process those widget modification messages in an event loop such as Application.OnIdle, or in another constructed event loop, so that a memo, for example, never has text written to it by two executables at once.
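
A minimal sketch of that idea, as a fragment of the same Lazarus form unit. It assumes the form declares FStatusServer (a TSimpleIPCServer) and StatusMemo (a TMemo), and that workers send their status strings to the server ID 'gui_main'. Because messages are only drained inside OnIdle, every write to the memo happens on the GUI's own main loop:

  procedure TMainForm.FormCreate(Sender: TObject);
  begin
    FStatusServer := TSimpleIPCServer.Create(Self);
    FStatusServer.ServerID := 'gui_main';
    FStatusServer.StartServer;
    Application.OnIdle := @AppIdle;
  end;

  procedure TMainForm.AppIdle(Sender: TObject; var Done: Boolean);
  begin
    // Drain any queued worker messages; a 1 ms timeout keeps the check
    // effectively non-blocking, and passing True reads the message so
    // that StringMessage is filled in.
    while FStatusServer.PeekMessage(1, True) do
      StatusMemo.Lines.Add(FStatusServer.StringMessage);
    Done := True;
  end;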

Advantages of this architecture: fewer dangerous threads, as separate executables do the processing work, number crunching, and data processing. There is no need to use pipes at a low level; simple messaging is used with a higher level IPC mechanism such as the SimpleIPC dll. Often a message sent back to the GUI is simply a few integers and a string, or just an integer, or just integers, or just a string. Example: change the position of a widget with x and y coordinates, update a memo status with a string saying that processing is completed, then send another string with the location of the output file and update the label widget which shows that location. All these messages sent back to the GUI are minimal, whereas the actual processing code in the worker contains the heavy amount of code for the processing of data and other work.
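
One way such minimal messages could be structured: an integer message type plus a short string payload. The constants and payload formats below are assumptions for illustration, not anything the simpleipc unit prescribes:

  // A possible tiny message "protocol": the integer MsgType says what kind
  // of update this is, the string payload carries the few values needed.
  const
    MSG_MOVE_WIDGET = 1;   // payload: 'x;y', e.g. '120;45'
    MSG_STATUS_TEXT = 2;   // payload: a status line for the memo
    MSG_OUTPUT_FILE = 3;   // payload: path of the finished output file

  // Worker side: report completion and where the output file landed.
  procedure ReportDone(const OutFile: string);
  var
    Client: TSimpleIPCClient;
  begin
    Client := TSimpleIPCClient.Create(nil);
    try
      Client.ServerID := 'gui_main';   // must match the GUI's server ID
      Client.Connect;
      Client.SendStringMessage(MSG_STATUS_TEXT, 'processing completed');
      Client.SendStringMessage(MSG_OUTPUT_FILE, OutFile);
    finally
      Client.Free;
    end;
  end;

On the GUI side, the OnIdle handler shown earlier would then branch on FStatusServer.MsgType and route each payload to the right widget.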

Using this technique of lightweight, small messages being sent back to the GUI while a processing or worker executable does the grunt work, multiple CPUs can be utilized. Several processing executables can be used, or just a single processing executable paired with the GUI executable for two executables in total. Two executables, one for the GUI and one for the processing, may be enough for some applications, but this is not a limitation: any number of executables can be used, as long as the GUI receives the data back in such a way that multiple executables do not write to the GUI at the same time.
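
Launching several workers is just a matter of starting several processes; the OS then spreads them over the available CPUs. A sketch using FPC's TProcess (the worker binary name and the idea of passing each worker its own SimpleIPC server ID on the command line are assumptions):

  uses
    SysUtils, Classes, process;

  procedure LaunchWorkers(Count: Integer);
  var
    i: Integer;
    P: TProcess;
  begin
    for i := 1 to Count do
    begin
      P := TProcess.Create(nil);
      P.Executable := 'worker';                        // hypothetical binary
      P.Parameters.Add('--ipc-id=worker' + IntToStr(i));
      P.Execute;                                       // returns immediately
      // In a real app, keep the TProcess objects in a list so you can later
      // check whether the workers are still running; they are left around
      // here only to keep the sketch short.
    end;
  end;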

In fact it was likely the intention of the Unix operating system: to make processes do the work. Yet it never really worked the way people wished it would: handling multiple processes is a difficult task, as people must write code to parse pipes or communicate over pipes, which can prove tedious and difficult. So people end up just using threads, or struggling with pipes and wishing there were an easier way. Unix was also, AFAIK, not originally designed with a GUI in mind. Using a high level, simple IPC mechanism to send messages back to the GUI, and vice versa to the worker executable, one can fulfill the Unix dream of using processes to do the work instead of less safe and more complex threads. A GUI or even a command line program spawns off one or more extra executables, and then the main program (the spawner) receives messages back to update its GUI (or console printlns for status).

Imagine a GUI with four buttons on it, and you click each button one after the other. Normally the application would lock up if one button fires off a procedure or function which does hefty, drawn out work, such as processing hundreds of files. So people end up using threads to make the application responsive. However, if the button or widget is just firing off some work, why not put that work into a separate executable? Answer: because communicating between two executables is difficult and obnoxious, and there are no easy ways to do it - so people just use threads inside one monolithic, mammoth application. This violates the Unix spirit of having small programs do the work. Even on Windows (it does not have to be Unix), using separate processes to spawn off different work is a good idea... in fact that is what the operating system itself does. The task bar shows a bunch of processes. Why can't an application have the same benefit the OS itself has of being able to utilize multiple processes? The answer, again, is a lack of simple IPC mechanisms and a computing science architecture or description of how to solve the problem without developer headaches.

Indeed an operating system does not do much communication between orthogonal processes: Firefox rarely needs to communicate with MS Word or OpenOffice. The VLC video player rarely (or never) needs to communicate with OpenOffice Calc or MS Excel. Applications that have a worker executable doing work will, however, need to communicate back to the GUI much more often, for status updates and for data to be displayed in the GUI widgets, such as edit boxes, memos, checkboxes, radio buttons, grids, etc. So one of the keys to a multiple process GUI architecture is having a simple but powerful communication mechanism to send messages, statuses and data back to the GUI, and vice versa to the worker/processing executable.

Examples to show with this article: a GUI app that launches GoLang code, C code and FPC code in separate executables, with a main GUI application executable. The processing executables (in GoLang, C, and FPC) will communicate back with the GUI, and the GUI will communicate with the executables - all in a simple and not overly complex manner, with little to no low level code such as sockets needed. Sockets are indeed a brilliant invention and piece of work from Unix, but a tad too low level for much of this work. The underlying high level communication system between the executables could indeed use sockets underneath to actually do the work, but the point is to keep the communication mechanism simple to use and not so low level for the developer who just needs to get work done (create his GUI and do the work in other executables), without having to know the intricate details of sockets/pipes/file sharing/memory sharing/operating system calls, etc.
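
As a sketch of what the FPC processing executable could look like (the GoLang and C workers would follow the same pattern through whatever binding they use, e.g. the SimpleIPC dll mentioned above), here is a minimal console worker; the server IDs, the command handling and the DoHeavyWork placeholder are all assumptions:

  program worker;
  {$mode objfpc}{$H+}

  uses
    SysUtils, simpleipc;

  procedure DoHeavyWork(const Cmd: string);
  begin
    // placeholder for the real number crunching / file processing
    Sleep(2000);
  end;

  var
    Server: TSimpleIPCServer;
    Reply: TSimpleIPCClient;
    Cmd: string;
  begin
    Server := TSimpleIPCServer.Create(nil);
    Reply := TSimpleIPCClient.Create(nil);
    try
      Server.ServerID := 'worker1';     // the GUI sends commands here
      Server.StartServer;
      Reply.ServerID := 'gui_main';     // the GUI listens for status here
      repeat
        if Server.PeekMessage(100, True) then
        begin
          Cmd := Server.StringMessage;
          DoHeavyWork(Cmd);
          if Reply.ServerRunning then
          begin
            if not Reply.Active then Reply.Connect;
            Reply.SendStringMessage('done: ' + Cmd);
          end;
        end;
      until False;   // a real worker would also handle a quit command
    finally
      Reply.Free;
      Server.Free;
    end;
  end.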

Imagine a project where, when you hit the compile button, it creates a GUI app that has a plain C executable to do some work and a GoLang executable to do other work, with the GUI written in Lazarus or Visual Studio. Or imagine a project that just uses a bunch of GoLang executables to do data processing, but since GoLang is not so good at GUI, the GUI is written in Lazarus or Visual Studio or some other GUI tool - you can still use GoLang or any other language for your data processing. Normally, to accomplish this one would have to use pipes or sockets, or a similar mechanism. What if a development environment (IDE) allowed you to compile this project in one step, so that you had a GUI app which called GoLang code in a separate exe, but you almost didn't even need to know it was in a separate executable and it all appeared to be part of the same application? Again, this was one of Unix's designs: one program could spawn off multiple processes/executables and it would appear that you were running just one program even though it might be several combined. But where is this for GUIs? It really only excelled in Unix console mode applications or piped command line sequences, such as sending the output of one program to another console program. Where is this architecture for GUIs? It's virtually non-existent.

Another advantage of having multiple executables, some for processing and one main one for the GUI, is that many languages today do not have GUI libraries. GoLang suffers from the fact that it is really just a command line and server tool and lacks something like Visual Studio or Lazarus for GUI. What if your current IDE could simply launch GoLang code as if it were part of the same application, but in a separate process (almost without you knowing it)? This is actually not so difficult with a simpleipc dll that handles sending messages from one process to the other.

Many tools are single language tools. You start a project in C++ and you don't use much else other than C++ and C. Why couldn't a C++ GUI application start off worker GoLang processing executables, with only the minimum of C++ used for the GUI layer? Or, if not a C++ GUI, then any other tool that creates a GUI, such as Lazarus, wxWidgets, or even HTML 5. Imagine an HTML 5 app that can connect to GoLang executables, FPC executables, or plain C executables to do the work - then GUI updates are sent back to the HTML 5 widgets. Memos are filled, text areas are updated, edit boxes are updated, but the GoLang/C/FPC/Rust/Nimrod code does the data processing and work, while the HTML 5, and possibly a little JavaScript, is the GUI.

Of course another advantage of having separate processes do the work is that they are safer than threads: if a process fails or has errors, the GUI app will not crash or be brought down, as the code resides in a separate executable. Again, the Unix dream - which can be lived not only on Unix but also on Windows.

Examples to build to demonstrate this concept of computing science: GoLang worker executables, FPC worker/processing executables, and plain C worker/processing executables, hooked up to a standard GUI tool such as Lazarus/Visual Studio/Delphi/PowerBASIC/HTML5 CEF/other. As I create demos they will be linked to here.
Copyright © War Strategists, M.G. Consequences 2009-2017