Myths About CGI Scalability


Myth or Fact?

If 300 people happen to be at your site at the same time, there will be 300 copies of your program running simultaneously, right? And with Perl CGI, that means 300 copies of the Perl interpreter loaded, right? And with a plain old native-executable CGI instead (which is what Powtils produces), it will still be 300 copies of that binary, right?

Answer

It is only partly true. Windows NT, for example, loads the code segment of an EXE or DLL into memory only once and shares it among all running instances; each process gets its own private data segment, but the code segment is shared. Unix does the same with ELF executables. On both operating systems, executables behave much like shared libraries.

Trivia: in fact, a Windows EXE can even be loaded as if it were a DLL:

  LoadLibrary('program.exe')

A rumor left over from the Windows 3.1 days convinced many people that the code segment is not shared between multiple running copies of the same EXE. In other cases it is not even a myth being spread; it is simply a lack of research into how operating systems actually work.
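
To make the trivia concrete, here is a rough Win32 sketch in FPC/Delphi syntax (the file name is only a placeholder). LoadLibrary maps the EXE image into the calling process much as it would map a DLL, without running it:

  program loadexe_demo;
  uses Windows;
  var
    h: HMODULE;
  begin
    h := LoadLibrary('program.exe');   { placeholder: any EXE image }
    if h <> 0 then
    begin
      { the image is now mapped into this process like a DLL; exported
        symbols, if any, could be looked up with GetProcAddress }
      FreeLibrary(h);
    end;
  end.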

More Speed Built into the OS

Operating systems (Unix, Windows, and the rest) are also smarter than we give them credit for, and we should not jump to conclusions about CGI performance without understanding how the operating system actually works underneath.

Have you ever launched a program such as Notepad on Windows, or a bigger one such as Internet Explorer, for the first time after booting? It takes a while to load (Notepad not so much, but still not instant). Launch the same program again right after you have opened it once and it opens noticeably faster.

That is another operating-system trick that acts like caching and speeds up CGI without you doing any work; it is simply built into the OS. Remember that most of these operating systems (Windows NT, Unix) were designed around the slow computers of their day.

Guess what happens with CGI programs? On a busy website, visitors launch the same CGI program over and over, and the operating system loves this pattern. It is like a caching system you never had to write yourself. Repeatedly starting the same CGI binary is far cheaper for the OS than starting several different programs such as a web browser, Excel, Word, WordPerfect, or OpenOffice.

This is partly because modern operating systems were designed in the days when 100 MHz computers with 64 MB of RAM were common.

In effect, there is a sort of built-in "FastCGI" already inside the operating system.

Now, I am not arguing that a FastCGI website could never be made to run faster than a regular CGI website; in some or even many cases it can (with more memory-leak hazards). My point is that people make false assumptions and treat operating systems as if they were stupid pieces of software with no built-in performance intelligence for cloned processes.

File System Caching, OS Caching

The same false assumptions are often made about caching. Sometimes the operating system is caching behind your back, and programming your own caching into the application may just be fighting work the operating system has already done for you.

File caching tricks and virtual memory are also used behind our backs. Some people worry about using files for storage because files live on the hard disk and must therefore be slow, but in many cases the operating system's file cache makes repeated access to the same files very fast.
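
If you want to watch the file cache at work, one rough FPC experiment (file name taken from the command line; nothing here is Powtils-specific) is to read the same file twice and time each pass; the second pass is usually served from memory, not the disk:

  program cachepeek;
  { read the same file twice; the second read normally comes from the
    operating system's file cache rather than the hard disk }
  uses SysUtils, DateUtils, Classes;

  function ReadOnceMs(const FileName: string): Int64;
  var
    ms: TMemoryStream;
    started: TDateTime;
  begin
    started := Now;
    ms := TMemoryStream.Create;
    try
      ms.LoadFromFile(FileName);
    finally
      ms.Free;
    end;
    Result := MilliSecondsBetween(Now, started);
  end;

  begin
    WriteLn('first read:  ', ReadOnceMs(ParamStr(1)), ' ms');
    WriteLn('second read: ', ReadOnceMs(ParamStr(1)), ' ms');
  end.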

Database queries are similar. Many people over-design their programs with their own caching code when the database server has already done much of that work for them (or could, after a configuration change, an added index, or a few tweaked settings).

With ISAPI, you have a DLL whose code is shared between all clients, with a private data set for each client. Ironically, that is similar to regular CGI in some ways. Again, I am not arguing that an ISAPI web application cannot be used to build a fast website, and I am not arguing that CGI is perfect; I am only trying to show that CGI is not all that bad. I have written other articles about how CGI gives you free garbage collection, since each process is killed when it finishes, whereas with ISAPI/FastCGI you are more prone to big memory leaks. Humans make mistakes, especially in languages like Delphi/FPC/C++, and that makes CGI advantageous here.

All that material you have read about shared libraries applies to executables too. Most people assume every launched EXE gets its own separate code segment, but executables are in fact a kind of library themselves.

What if I really do need to scale?

Okay, I have already ranted about and warned against premature optimization elsewhere on this wiki, but let's pretend you actually have a huge website like eBay or OkCupid and really do want to start worrying about the performance of your CGI programs.

You can use the CGI helper technique: write a helper program that is launched once and restarted every so many weeks. The small CGI programs connect to this helper executable (through inter-process communication, sockets, or pipes), and the helper handles database pooling, keeps state, and talks to the web server. That way the heavy code stays loaded in memory and your little CGI programs just talk to it. Alternatively, you can build Apache modules or an Apache handler, keep several (about ten) CGI instances open ahead of time, or build a thread manager, all while reusing most of the code you wrote five years ago back when you were using plain CGI. How is that for reusing your existing code?

Is this reinventing FastCGI? In some ways it is better than FastCGI, because it runs on any server that supports plain CGI, not just FastCGI-capable servers. The helper can be started initially with a cron job or a ShellExec call, and then restarted every 1000 requests or every 25 days or so (to keep even a small memory leak from slowly eating more and more memory). Of course, you can also simply use FastCGI and reuse many of your CGI algorithms and code there. A rough sketch of the helper idea follows.
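
Here is a minimal sketch of the helper idea, assuming FPC's SimpleIPC unit for the inter-process channel (sockets or pipes would do just as well); the channel name and the messages are made up for illustration, and a real helper would also need a reply channel back to each CGI:

  { helper.pas: started once (by cron or a bootstrap CGI) and left running;
    it keeps state such as pooled database connections between requests }
  program helper;
  uses SimpleIPC;
  var
    srv: TSimpleIPCServer;
  begin
    srv := TSimpleIPCServer.Create(nil);
    srv.ServerID := 'cgi_helper';      { made-up channel name }
    srv.Global := True;                { reachable from other processes }
    srv.StartServer;
    while True do                      { runs until you restart it }
      if srv.PeekMessage(1000, True) then
        { srv.StringMessage now holds one request sent by a CGI instance;
          serve it from the pooled connections / cached data kept here }
        WriteLn('request: ', srv.StringMessage);
  end.

  { smallcgi.pas: each small CGI just hands the heavy work to the helper }
  program smallcgi;
  uses SimpleIPC;
  var
    cli: TSimpleIPCClient;
  begin
    WriteLn('Content-type: text/plain');
    WriteLn;
    cli := TSimpleIPCClient.Create(nil);
    try
      cli.ServerID := 'cgi_helper';
      cli.Connect;
      cli.SendStringMessage('do-something');   { made-up request }
    finally
      cli.Free;
    end;
    WriteLn('request handed to the helper');
  end.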

Theory, or fact?

Is it really possible for a CGI program to talk to another program that keeps running for several days, or is that just a theory? I have done it myself on a typical cPanel account, so don't take my word for it; take my experience for it!

Instead of building a helper program to help me scale, I needed to run a few programs that I did not want to launch with cron jobs, and I did not want them running inside my web browser either; I wanted them to run as true separate programs. The solution? My CGI program launched them initially, as a bootstrap, in only a few lines of code. The CGI program exited immediately but left the external programs running.

The external processes reported their progress to a log file (I was grabbing over 20,000 websites and did not want to sit there watching my web browser). So I wrote a CGI program that launched new processes in the background without making them children of the CGI itself: as soon as the CGI ran, it launched the processes and then killed itself, but only itself, not the processes it had launched (they were started as separately parented processes, not as children).

The programs ran for a few hours until they were done. I was cooking, eating, cleaning, and sleeping while I waited, and I did not have to sit watching my web browser and worrying about the 300-second CGI timeout, because the processes were launched separately and the timeout did not apply to them.
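
For the curious, here is roughly what such a bootstrap looks like, sketched for Unix with FPC's BaseUnix unit (the log path, the count, and the loop body are placeholders, not my actual fetching code): the CGI forks, a fully detached grandchild does the long job, and the CGI itself answers the browser and exits at once.

  program bootstrap_cgi;
  uses BaseUnix;

  procedure LongJob;
  var
    log: Text;
    i: Integer;
  begin
    { let go of the web server's pipes so nothing waits on this process }
    Close(Input); Close(Output); Close(ErrOutput);
    Assign(log, '/tmp/fetch-progress.log');   { placeholder log path }
    Rewrite(log);
    for i := 1 to 20000 do
    begin
      { ...fetch site number i here... }
      WriteLn(log, 'done ', i);
      Flush(log);
    end;
    Close(log);
  end;

  begin
    if FpFork = 0 then
    begin
      FpSetSid;                 { new session: detached from the CGI }
      if FpFork = 0 then
        LongJob                 { grandchild: no longer a child of the CGI }
      else
        Halt(0);                { middle process exits immediately }
    end
    else
    begin
      { the CGI itself: answer the browser and quit right away }
      WriteLn('Content-type: text/plain');
      WriteLn;
      WriteLn('Job started; progress goes to /tmp/fetch-progress.log');
    end;
  end.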

The same technique could be used to launch a CGI helper program that takes care of database pooling and preserves some state (keeping code libraries in memory, keeping database connections open, and so on). The helper program would run for several days. For robustness you could run three helper programs, in case one of them fails, a bit like spacecraft software that carries redundant backups in case the primary software hits a fatal exception.

But still, stop worrying so much

I have warned you: the operating system is pretty smart, and in 98 percent of your applications you will not need any of these tricks. Don't prematurely optimize, but know that optimization is available to you, and more of it is available with CGI than with Ruby, Python, or PHP. Don't be afraid; forget the FUD about CGI. In Ruby you have to drop into C to speed things up. With the Powerful Web Utilities (Powtils) you don't mix and match; you just use one language. (You could launch a PHP or PascalScript interpreter from within a Powtils program if you really wanted to, but I am not sure what purpose that would serve.)

Knowing that optimization is available to you is healthier than trying to program the optimization in from day one (such as choosing C for your entire website on day one because C is fast). At the same time, writing everything in Ruby from day one and promising to optimize the Ruby later is not smart either. Saying that your current Powtils program is fast but not super fast is smart, because you CAN optimize it later, with more Pascal code and helper Pascal programs, whereas Ruby cannot really be optimized significantly; you have to resort to C. That is not so nice, because you end up switching between and intermingling two languages, and most people just throw more money at hardware and write more and more Ruby.

So my point is not that optimization is evil. Knowing you have options and real potential for optimization is much healthier than having no potential for optimization at all. Some algorithms, for example, need to be optimized from day one; trying to optimize them two years down the road will not work, and they have to be rewritten from scratch. That is not the case with CGI: CGI is scalable, it is fast from day one, and it can be made even faster (with the risk of more memory leaks, just like FastCGI) using the CGI helper program technique.

Did I mention that Google uses C++ for the template system that renders its search pages? They do not use some fancy JSP/ASP/Perl/PHP solution. You can bet that Google uses some of these "helper program techniques" to make the Google experience faster than fast. But are you Google? Remember, you may not need to go faster than fast, but the fact that going faster than fast is available to you is a bonus over all the other technologies. You do not even have to use FastCGI to do it; you can use your own helper program technique. And if you do want FastCGI at some point, all your existing Pascal algorithms are available to you there too.

If You Decide FastCGI Is for You

A FastCGI program has a different layout from a regular CGI program, but the data structures and algorithms you have written over the past 5 or 10 years in FPC/Delphi can obviously still be used inside it. That is easier than choosing, say, Ruby or PHP, where none of your FPC/Delphi code can be reused.
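
The difference in layout looks roughly like this. It is only a compilable skeleton: the stub routines below are hypothetical stand-ins for a real FastCGI binding (FPC's fcl-web, for instance) and for your own page-building code, not an actual API:

  program fcgi_shape;
  { skeleton only: in plain CGI the body runs once and the process exits,
    reclaiming all memory automatically; in FastCGI the process stays
    alive and loops over requests }

  procedure InitOnce;
  begin
    { open database connections, load lookup tables, etc. }
  end;

  function AcceptNextRequest: Boolean;
  begin
    Result := False;   { a real binding blocks here and returns True per request }
  end;

  procedure ProduceThePage;
  begin
    { the very same FPC/Delphi algorithms you used in plain CGI }
  end;

  begin
    InitOnce;                    { done once for the life of the process }
    while AcceptNextRequest do
    begin
      ProduceThePage;
      { free anything allocated per request here, or the long-lived
        process will slowly leak memory }
    end;
  end.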

Some Advantages Over Ruby/Python

The only way to optimize Ruby is to stop using Ruby and use C, which means mixing two languages and a whole bunch of mess. The same goes for Python: extend Python with C and write the fast code in C. How is that an advantage over just using something like Powtils, where you use one language for the logic and templates or Powtils widgets for the design?

References:

"Multiple instances of the same application or .dll use the same code segment, but each has its own data segment."
-- http://msdn2.microsoft.com/en-us/library/ms633574.aspx

