Jobs
Jobs are operations that are started automatically at a certain time according to a specified schedule. For example, mailing daily reports to selected users, sending invoices, or running database integrity checks.
Defining job operations
Jobs may be created and edited on develop and production environments. To create and edit jobs, application administration rights are needed, or the user must have specific job editing rights.
Navigate to "Blueprint -> Jobs", where a list of jobs related to the application can be found. Click the "New Job" button to create a new job. To define what the job does, an existing class action can be chosen or a new action can be created specifically for this job. To constrain the records the job will operate on, a selection query can be defined in the application.
Please be aware that if a selection query isn't specified, the action will run on all records of the class.
Another way to define what the job does is by entering a URL to a script. The script runs on its own and does not use anything else, such as the selection query; all of this must be handled by the script itself.
To schedule imports, select an existing import definition from the "Import" drop-down menu. More information on importing files can be found here.
It is also possible to send a report after a job completes. Some actions generate reports automatically, or a query can be selected from the "Reporting query" drop-down menu. Set the e-mail address to send the report to in the "Report" section. Fill in "Email address for alerts" to receive a notification email when the job fails to run successfully.
Scheduling jobs
In the job detail screen, jobs can be scheduled to run automatically in the "Schedule" section.
Note that jobs are only scheduled and run on the production environment; however, a job can be run or tested manually on any environment.
Job execution times can be defined at the level of months, days, hours, and minutes. For example, checking the Friday box and entering 22 in the hours field will make job A run every Friday evening at 22:00. A job can also be configured to run after another job: for example, job B runs after job A, and job C runs after job B. Jobs A, B, and C are then part of a 'job chain'.
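The Friday-at-22:00 example can be sketched as a next-run computation. This is an illustrative sketch in Python, not the platform's actual scheduler; the `next_run` helper and its parameters are hypothetical.

```python
from datetime import datetime, timedelta

def next_run(now, weekday, hour):
    """Return the next datetime on the given weekday (0=Monday, 4=Friday)
    at the given whole hour, strictly after `now`."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    # Advance day by day until the weekday matches and the time is in the future.
    while candidate.weekday() != weekday or candidate <= now:
        candidate += timedelta(days=1)
    return candidate

# A Wednesday afternoon; the next Friday-22:00 slot is two days later.
now = datetime(2024, 1, 3, 15, 30)
print(next_run(now, weekday=4, hour=22))  # 2024-01-05 22:00:00
```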
By default, a job that is part of a chain will be run regardless of the success of the job it is supposed to start after. This default can be overridden by checking the field 'Do not execute if "Start after job" fails'. Finally, a job chain is only run in full when it is executed automatically; a manual run will only run that specific job.
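The chain behaviour described above can be illustrated with a small sketch. This is not the platform's implementation; the job representation and the `skip_if_previous_failed` flag (mirroring 'Do not execute if "Start after job" fails') are assumptions for illustration.

```python
def run_chain(jobs):
    """Run a chain of jobs in order. Each job is a dict with a `run`
    callable returning True on success, and a flag that mirrors the
    'Do not execute if "Start after job" fails' checkbox."""
    previous_ok = True
    results = {}
    for job in jobs:
        if not previous_ok and job["skip_if_previous_failed"]:
            results[job["name"]] = "skipped"
            continue
        previous_ok = job["run"]()
        results[job["name"]] = "ok" if previous_ok else "failed"
    return results

chain = [
    {"name": "A", "run": lambda: False, "skip_if_previous_failed": False},
    {"name": "B", "run": lambda: True,  "skip_if_previous_failed": True},   # skipped: A failed
    {"name": "C", "run": lambda: True,  "skip_if_previous_failed": False},  # default: runs anyway
]
print(run_chain(chain))  # {'A': 'failed', 'B': 'skipped', 'C': 'ok'}
```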
It is also possible to run a job one time only. In that case, the 'start at' date is set. This is mainly used when scheduling a mail batch; the system then creates a single-run job automatically. Single-run jobs cannot be part of a job chain.
When a job is saved (and the property "deactivated" is false), its schedule takes effect immediately. After saving a job, a message will show the scheduling result. This information is also shown in the job status field, which can have four different values:
Job schedule status values
- Scheduled
- Not scheduled
- Error
- Executed one time
In case the job status is 'not scheduled', the definition error may contain more details on why. If the job is successfully scheduled, the 'next scheduled run' field will show the next date on which the job will be executed automatically. Again, be aware that this information is only shown on (production) environments where jobs are allowed to run automatically.
Synchronizing jobs to production, for example from the develop server, will also schedule them immediately.
An overview of the scheduled job or jobs can be shown by clicking the 'Show job schedule' button, either in the job details or on the search screen. This shows all future scheduled runs of the job(s).
If it is necessary to temporarily stop a job from being executed automatically, check 'deactivated' and save. This will remove the job from the schedule but will retain all information entered, including the scheduling.
It is possible that the scheduling of a job collides with its execution, meaning the job is still running when it is supposed to run again. Any scheduled run that falls due while the job is still running will be skipped. In most cases the user will be notified when such a collision occurs, so they can alter the schedule or improve the performance of the job.
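The skip-on-collision rule can be sketched with a non-blocking lock. This is purely illustrative Python under an assumed `Job` wrapper; the platform's own mechanism is not exposed.

```python
import threading
import time

class Job:
    """Sketch of the skip-on-collision rule: a scheduled run is skipped
    outright if the previous run has not finished yet."""
    def __init__(self, work):
        self._work = work
        self._lock = threading.Lock()

    def trigger(self):
        # Non-blocking acquire: if the job is already running, skip this run.
        if not self._lock.acquire(blocking=False):
            return "skipped"
        try:
            self._work()
            return "ran"
        finally:
            self._lock.release()

job = Job(lambda: time.sleep(0.2))
t = threading.Thread(target=job.trigger)
t.start()
time.sleep(0.05)          # the first run is still in progress...
print(job.trigger())      # ...so this overlapping trigger prints "skipped"
t.join()
print(job.trigger())      # prints "ran"
```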
Job execution and emergency termination
Whenever a job is executed, automatically or manually, the job will be shown to be running in the job search screen. Please note that the screen must be refreshed to see if the job is still running or not. If it is necessary to stop the execution of the job, go to 'Monitor -> Background processes'. Here it is possible to terminate running jobs and any other running background process of the application.
When a job is run, an execution log is maintained. This can be found in the backstage under "History -> Executed jobs". It shows the start and end time of a job run, its result, and possibly a report. The result and date of the last run are also shown in the job definition details, as well as its average duration. This can help in monitoring the success of jobs.
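As a sketch of what such a log makes possible, the average duration can be computed from start and end times. The entry shape below is an assumption for illustration, not the actual log format.

```python
from datetime import datetime

# Hypothetical log entries: start time, end time, and result of each run.
log = [
    {"start": datetime(2024, 1, 1, 22, 0), "end": datetime(2024, 1, 1, 22, 4), "result": "ok"},
    {"start": datetime(2024, 1, 2, 22, 0), "end": datetime(2024, 1, 2, 22, 6), "result": "ok"},
    {"start": datetime(2024, 1, 3, 22, 0), "end": datetime(2024, 1, 3, 22, 2), "result": "failed"},
]

durations = [(e["end"] - e["start"]).total_seconds() for e in log]
average = sum(durations) / len(durations)
print(f"average duration: {average / 60:.1f} minutes")  # average duration: 4.0 minutes
```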
Effects of job definition mutations on running jobs
Changing the definition of a job can alter the way it operates, for example by changing the action it executes or the query used as a selection query, and it can change the scheduling. Modifying the job definition while it is running will not change its operation or its scheduling. Jobs that are scheduled to start after the running job WILL be affected by a change in how they operate (before they are started), but NOT in the way they were scheduled at the time the job chain was started.
Note that changes in (Velocity) scripts used by the job, CAN have immediate effects even on running jobs.
Jobs in clustered environments
If an engine is part of a cluster, jobs are only scheduled on the current cluster head. When switching, all jobs are scheduled on the new cluster head automatically. Should a job be running at the time of the switch, that job and any other jobs that are part of its chain will complete on the original slave engine. Note that in a cluster environment it is theoretically possible for the same job to run simultaneously on more than one engine, because the cluster engines do not presently share information on running background processes.