#182070 - 2007-10-30 02:26 AM
Re: multi threading
[Re: Shawn]
Glenn Barnas
I've always written these scripts such that the children are independent of the parent. They carry on even if the parent dies. I don't think it's like Unix, where the child processes don't survive the parent unless started with Exec.
In my example, the children will collect the data even if the parent dies. The next cycle, the parent will most likely simply clean up after the prior process if anything remained. Of course, the new parent could also assume that the prior parent didn't succeed, and could process the files present before creating its own offspring.
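To make that concrete, here's a rough sketch of the pattern (the paths, file mask, and child script name are all made up for illustration). The parent sweeps up anything a prior, possibly dead, parent left behind, then launches its children with RUN, which doesn't wait - so the children keep going even if this parent dies:

; Parent sketch - hypothetical paths and child script
$Queue = 'C:\Collect\Output'

; sweep up anything a prior run left behind
$File = Dir($Queue + '\*.out')
While $File <> '' And @ERROR = 0
  ; process or archive the stale file here, then remove it
  Del $Queue + '\' + $File
  $File = Dir($Queue + '\*.out')   ; restart the search after each delete
Loop

; RUN starts each child asynchronously and does not wait for it
For Each $Target In Split('server1,server2,server3', ',')
  Run 'kix32.exe collect.kix $Target=' + $Target
Next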
Since we don't have the luxury of signals and IPC (that I know of, at least!) within Kix, I just write simple processes. Most are non-critical, so a parent dying would result in no data collection that day - not a crisis. My primary use of this technology is the nightly query of about 300 servers - we gather disk space info, summarize event log warnings/errors, verify scheduled tasks, and a couple of other "server health" items. All gets collected with one collection script per server, run 50 at a time, scheduled similarly to what I illustrated above. 90% of the collection runs in 15 minutes, but we have a few ancient systems that take about 30-40 minutes to complete.
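A throttled launcher for that kind of run might look roughly like this - a sketch, not the production script. It assumes servers.txt lists one server per line and that each child drops a .done flag file when it finishes:

; Throttled launcher sketch - assumes children write <server>.done flags
$Flags   = 'C:\Collect\Flags'
$MaxJobs = 50
$Started = 0

If Open(1, 'C:\Collect\servers.txt') = 0
  $Server = ReadLine(1)
  While @ERROR = 0
    ; hold here while the number of unfinished children is at the limit
    While ($Started - CountFiles($Flags + '\*.done')) >= $MaxJobs
      Sleep 30
    Loop
    Run 'kix32.exe collect.kix $Target=' + $Server
    $Started = $Started + 1
    $Server = ReadLine(1)
  Loop
  $Rc = Close(1)
EndIf

; CountFiles() isn't built in - a tiny helper using Dir()
Function CountFiles($Mask)
  Dim $f
  $CountFiles = 0
  $f = Dir($Mask)
  While $f <> '' And @ERROR = 0
    $CountFiles = $CountFiles + 1
    $f = Dir()
  Loop
EndFunction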
My Kix-based software deployment system works in a similar fashion - a KF GUI is used to stage a web content release, moving files from the dev fileserver to a deployment server (code freeze). Another person runs the KF GUI that initiates the deployment - it takes the GUI input, generates a (temp) INI file with the instructions, and then drops the INI file into a queue folder. The person doing the deploy has A-D rights to run the deploy to QA or PROD, but no logon or share access to the web servers.
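The queue drop itself is just INI writes plus a copy - something along these lines, with made-up section/key names and share paths:

; Queueing a deploy job - illustrative only, not the real job format
$Queue = '\\DeployServer\Queue$'
$Name  = 'deploy_' + @TICKS + '.ini'
$Local = 'C:\Temp\' + $Name          ; build the file locally first

$Rc = WriteProfileString($Local, 'Job', 'Product',     'WebSite1')
$Rc = WriteProfileString($Local, 'Job', 'Release',     '2.4.1')
$Rc = WriteProfileString($Local, 'Job', 'Environment', 'QA')
$Rc = WriteProfileString($Local, 'Job', 'RequestedBy', @USERID)

; drop the finished file into the queue for the service to pick up
Copy $Local $Queue + '\' + $Name
Del $Local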
The service running on the deploy server sees the new INI file in the queue, determines which environment it is being deployed to, and creates a scheduled event (task, but no trigger) using the appropriate credentials. This is a modification of the "run" concept above, since we need to use an alternate ID. (tried RunNas, but no joy) The service then triggers the task with a RunNow action, starting the actual deploy process. I can kick off about 3 deployments per minute - different releases, different "products", and different environments.
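Roughly, the idea is something like the sketch below - very simplified, and the paths, task names, and credential handling are made up. One way to fake a "task with no trigger" from Kix is to shell out to schtasks.exe with a dummy ONCE schedule and then /run it:

; Simplified queue-service loop
$File = Dir('D:\Queue\*.ini')
While $File <> '' And @ERROR = 0
  $Ini  = 'D:\Queue\' + $File
  $Env  = ReadProfileString($Ini, 'Job', 'Environment')
  $User = ReadProfileString('D:\Service\Creds.ini', $Env, 'User')
  $Pass = ReadProfileString('D:\Service\Creds.ini', $Env, 'Password')
  $Task = 'Deploy_' + @TICKS

  ; create the task under the alternate ID - the ONCE schedule is a dummy,
  ; we never rely on it firing on its own (adjust the /st format for your OS)
  $Cmd = 'schtasks /create /tn ' + $Task + ' /sc once /st 00:00'
  $Cmd = $Cmd + ' /tr "kix32.exe deploy.kix $JobFile=' + $Ini + '"'
  $Cmd = $Cmd + ' /ru ' + $User + ' /rp ' + $Pass
  Shell $Cmd

  ; the "RunNow" step - fire the task immediately
  Shell 'schtasks /run /tn ' + $Task

  $File = Dir()
Loop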
The Staging process is self-contained, but the Deploy GUI, scheduling service, and deploy-job processor all communicate with each other via the job INI file - even to the point of passing interactive status messages back to the GUI. The end-result is then passed back to a status file that the Staging tool can interrogate, displaying the status (pending, queued, success, or failure) of the staged job. As complex as this is, it's tolerant of any component failure - if the service fails, the jobs hold in the queue. If the deploy process fails, the job can be requeued. If the deployment itself fails, the modified files are restored from the ZIP backup.
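The status plumbing is nothing fancier than more INI reads and writes - conceptually like this, again just a sketch with invented key names ($JobFile and $Release would hold the real paths):

; In the deploy-job processor - push progress back into the job file
$Rc = WriteProfileString($JobFile, 'Status', 'State',   'running')
$Rc = WriteProfileString($JobFile, 'Status', 'Message', 'Copying content to web servers')
; ...and record the end result where the Staging tool looks for it
$Rc = WriteProfileString('\\Deploy\Status$\' + $Release + '.ini', 'Result', 'State', 'success')

; In the Staging tool - poll the same key and translate it for display
$State = ReadProfileString('\\Deploy\Status$\' + $Release + '.ini', 'Result', 'State')
Select
  Case $State = ''
    ? 'pending'
  Case $State = 'queued'
    ? 'queued'
  Case $State = 'success'
    ? 'success'
  Case 1
    ? 'failure'
EndSelect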
I'd really like to play with some type of IPC, so two independent processes could more readily communicate - in this case, the Deploy Console and the deploy service, then the deploy service and the deploy-job processor. KF sockets, maybe?
G-
PS - aren't ya glad you asked a simple question?
_________________________
Actually I am a Rocket Scientist!