You define a class that implements the Job interface. This class indicates only what type of task the job performs. Quartz also needs to know the various attributes you want a given job instance to carry; that is the role of the JobDetail class.
Let's first look at the nature of jobs and the lifecycle of job instances. Recall the code snippet from Lesson 1:
```java
// define the job and tie it to our HelloJob class
JobDetail job = newJob(HelloJob.class)
    .withIdentity("myJob", "group1") // name "myJob", group "group1"
    .build();

// Trigger the job to run now, and then every 40 seconds
Trigger trigger = newTrigger()
    .withIdentity("myTrigger", "group1")
    .startNow()
    .withSchedule(simpleSchedule()
        .withIntervalInSeconds(40)
        .repeatForever())
    .build();

// Tell quartz to schedule the job using our trigger
sched.scheduleJob(job, trigger);
```
Now consider the job class "HelloJob" defined as follows:
```java
public class HelloJob implements Job {

    public HelloJob() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        System.err.println("Hello! HelloJob is executing.");
    }
}
```
Notice that we passed a JobDetail instance to the scheduler. Because we gave JobDetail the class of the job to be executed when building it, the scheduler knows what type of job to run. Each time the scheduler executes the job, it creates a new instance of that class before calling its execute(...) method. After execution completes, the reference to the instance is dropped and the instance becomes eligible for garbage collection. One consequence of this execution strategy is that the job class must have a no-argument constructor (when the default JobFactory is used). Another is that it makes no sense to define stateful data fields on the job class, since their values will not be preserved between executions of the job.
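The "fresh instance per execution" rule can be made concrete with a small, self-contained model. This is not Quartz code: `CountingJob` and `fireTwice` are illustrative stand-ins for a job class and the scheduler's use of the default JobFactory.

```java
import java.util.function.Supplier;

// Hypothetical stand-in for Quartz's behavior: the scheduler builds a brand-new
// job object for every execution, so instance fields never carry state across runs.
public class FreshInstanceDemo {

    public static class CountingJob {
        int runs = 0;                    // instance field: starts at 0 in every new instance
        int execute() { return ++runs; } // models Job.execute(...)
    }

    // Models the default JobFactory: a new instance is created for each fire.
    public static int fireTwice(Supplier<CountingJob> factory) {
        factory.get().execute();              // new instance -> its runs becomes 1
        return factory.get().execute();       // another new instance -> runs is 1 again
    }

    public static void main(String[] args) {
        System.out.println(fireTwice(CountingJob::new)); // prints 1, not 2
    }
}
```

Because each execution gets a fresh object, the counter never reaches 2; any state you want to survive between runs must live somewhere else, which is exactly what the JobDataMap is for.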
So how do you add properties or configuration to a job instance? And how do you track a job's state across multiple executions? The answer is the JobDataMap, which is part of the JobDetail object.
JobDataMap
The JobDataMap can hold any number of (serializable) data objects that you want made available to the job instance when it executes. JobDataMap is an implementation of the Java Map interface, with some convenience methods added for storing and retrieving data of primitive types.
Before adding a job to the scheduler, you can put the data into the JobDataMap when building the JobDetail, as shown in the following example:
```java
// define the job and tie it to our DumbJob class
JobDetail job = newJob(DumbJob.class)
    .withIdentity("myJob", "group1") // name "myJob", group "group1"
    .usingJobData("jobSays", "Hello World!")
    .usingJobData("myFloatValue", 3.141f)
    .build();
```
During job execution, you can retrieve data from JobDataMap, as shown in the following example:
```java
public class DumbJob implements Job {

    public DumbJob() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobKey key = context.getJobDetail().getKey();

        JobDataMap dataMap = context.getJobDetail().getJobDataMap();

        String jobSays = dataMap.getString("jobSays");
        float myFloatValue = dataMap.getFloat("myFloatValue");

        System.err.println("Instance " + key + " of DumbJob says: " + jobSays
                + ", and val is: " + myFloatValue);
    }
}
```
If the job class provides setter methods corresponding to the keys stored in the JobDataMap (for example, a setJobSays(String val) method for the "jobSays" key in the example above), Quartz's default JobFactory implementation will automatically call those setters when the job is instantiated, so there is no need to explicitly fetch the values from the map inside the execute() method.
The JobDataMap available from the JobExecutionContext during job execution is particularly convenient: it is the union of the JobDataMap on the JobDetail and the one on the Trigger, and when the same key exists in both, the trigger's value overrides the job detail's.
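The merge rule can be sketched in plain Java, using ordinary HashMaps as stand-ins for the two JobDataMaps. This models the override behavior described above; it is not Quartz's implementation of getMergedJobDataMap().

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the merged-map rule: start from the JobDetail's map,
// then overlay the Trigger's map so trigger values win on key collisions.
public class MergedMapDemo {

    public static Map<String, Object> merged(Map<String, Object> jobMap,
                                             Map<String, Object> triggerMap) {
        Map<String, Object> result = new HashMap<>(jobMap);
        result.putAll(triggerMap); // trigger entries overwrite job entries with the same key
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> jobMap = new HashMap<>();
        jobMap.put("jobSays", "from JobDetail");
        jobMap.put("myFloatValue", 3.141f);

        Map<String, Object> triggerMap = new HashMap<>();
        triggerMap.put("jobSays", "from Trigger");

        System.out.println(merged(jobMap, triggerMap).get("jobSays")); // prints: from Trigger
    }
}
```

Keys present only on the job detail ("myFloatValue" here) survive the merge unchanged; only colliding keys take the trigger's value.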
The following example obtains the merged JobDataMap from the JobExecutionContext during job execution:
```java
public class DumbJob implements Job {

    public DumbJob() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobKey key = context.getJobDetail().getKey();

        JobDataMap dataMap = context.getMergedJobDataMap(); // Note the difference from the previous example

        String jobSays = dataMap.getString("jobSays");
        float myFloatValue = dataMap.getFloat("myFloatValue");
        ArrayList state = (ArrayList) dataMap.get("myStateData");
        state.add(new Date());

        System.err.println("Instance " + key + " of DumbJob says: " + jobSays
                + ", and val is: " + myFloatValue);
    }
}
```
If you instead want the default JobFactory to "inject" the data automatically, the job class would look like this:
```java
public class DumbJob implements Job {

    String jobSays;
    float myFloatValue;
    ArrayList state;

    public DumbJob() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobKey key = context.getJobDetail().getKey();

        JobDataMap dataMap = context.getMergedJobDataMap(); // Note the difference from the previous example

        state.add(new Date());

        System.err.println("Instance " + key + " of DumbJob says: " + jobSays
                + ", and val is: " + myFloatValue);
    }

    public void setJobSays(String jobSays) {
        this.jobSays = jobSays;
    }

    public void setMyFloatValue(float myFloatValue) {
        this.myFloatValue = myFloatValue; // assign to the field, not the parameter
    }

    public void setState(ArrayList state) {
        this.state = state; // assign to the field, not the parameter
    }
}
```
Job instance
You can create a single job class and then create multiple JobDetail instances associated with it, each with its own set of properties and its own JobDataMap, and add all of them to the scheduler.
For example, you might create a class "SalesReportJob" that implements the Job interface. The job expects a parameter (passed in through the JobDataMap) naming the salesperson the sales report should cover. You could then create multiple definitions of the job (JobDetails), such as "SalesReportForJoe" and "SalesReportForMike", storing "joe" and "mike" in the corresponding JobDataMaps.
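The "one class, many definitions" idea can be modeled in plain Java. The record below is an illustrative stand-in for a JobDetail (the names `JobDefinition` and `salesReportFor` are invented for this sketch, and `Object.class` stands in for the SalesReportJob class); in real Quartz code you would build each definition with the JobBuilder as shown earlier.

```java
import java.util.Map;

// Plain-Java model of "one job class, many job definitions": each definition
// pairs the same job class with its own identity and its own data map.
public class JobDefinitionDemo {

    // Illustrative stand-in for JobDetail: class + name + JobDataMap-like map.
    public record JobDefinition(Class<?> jobClass, String name, Map<String, Object> dataMap) {}

    // Builds one definition per salesperson, all referring to the same job class.
    public static JobDefinition salesReportFor(String salesperson) {
        return new JobDefinition(
                Object.class, // stand-in for SalesReportJob.class
                "SalesReportFor" + salesperson,
                Map.of("salesperson", salesperson.toLowerCase()));
    }

    public static void main(String[] args) {
        JobDefinition joe = salesReportFor("Joe");
        JobDefinition mike = salesReportFor("Mike");
        System.out.println(joe.name() + " -> " + joe.dataMap().get("salesperson"));
        System.out.println(mike.name() + " -> " + mike.dataMap().get("salesperson"));
    }
}
```

Both definitions share the same job class; only their names and data differ, which is exactly how "SalesReportForJoe" and "SalesReportForMike" relate to "SalesReportJob".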
When a trigger fires, the JobDetail it is associated with is loaded, and the job class it refers to is instantiated via the JobFactory configured on the Scheduler. The default JobFactory simply calls newInstance() on the job class and then attempts to call setter methods matching the keys in the JobDataMap. You can also provide your own JobFactory implementation, for example to let your IoC or DI container create and initialize job instances.
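The setter-matching step can be sketched with reflection. This is a minimal model of the behavior described above, not Quartz's actual JobFactory implementation; the class and method names here are illustrative.

```java
import java.lang.reflect.Method;
import java.util.Map;

// Sketch of JobFactory-style setter injection: for each key in the data map,
// look for a matching setXxx(...) method on the job object and invoke it.
public class SetterInjectionDemo {

    public static class DumbJob {
        private String jobSays;
        public void setJobSays(String v) { this.jobSays = v; }
        public String getJobSays() { return jobSays; }
    }

    // Calls job.setKey(value) for each map entry whose key has a matching setter.
    public static void inject(Object job, Map<String, Object> dataMap) {
        try {
            for (Map.Entry<String, Object> e : dataMap.entrySet()) {
                String setter = "set" + Character.toUpperCase(e.getKey().charAt(0))
                        + e.getKey().substring(1);
                for (Method m : job.getClass().getMethods()) {
                    if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                        m.invoke(job, e.getValue());
                    }
                }
            }
        } catch (ReflectiveOperationException ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        DumbJob job = new DumbJob();
        inject(job, Map.of("jobSays", "Hello World!"));
        System.out.println(job.getJobSays()); // prints: Hello World!
    }
}
```

A container-backed JobFactory would replace both the newInstance() call and this injection step with whatever wiring the container performs.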
In Quartz's terminology, we refer to the stored JobDetail as a "job definition" or "JobDetail instance", and to an executing job as a "job instance" or an "instance of a job definition". When we simply say "job", we generally mean the job definition, i.e. the JobDetail; when we mean the class that implements the Job interface, we usually say "job class".
Job status and concurrency
There are a few points to note about job state data (i.e. the JobDataMap) and concurrency. A couple of annotations, placed on the job class, affect Quartz's behavior here.
@DisallowConcurrentExecution: adding this annotation to the job class tells Quartz not to execute multiple instances of the same job definition (i.e. the same JobDetail) concurrently. Note the wording carefully. Using the example from the previous section: if the "SalesReportJob" class carries this annotation, only one instance of "SalesReportForJoe" may execute at a time, but it can still execute concurrently with an instance of "SalesReportForMike". The constraint therefore applies to the JobDetail, not to the job class. Even so, it was decided (during Quartz's design) that the annotation should live on the job class, because the content of the job class usually determines whether concurrent execution makes sense.
@PersistJobDataAfterExecution: adding this annotation to the job class tells Quartz to update the stored copy of the JobDetail's JobDataMap after the job's execute method completes successfully (without throwing an exception), so that the next execution of the same job (i.e. the same JobDetail) receives the updated values rather than the original ones. Like @DisallowConcurrentExecution, although the annotation is placed on the job class, its effect applies to a job definition instance, not to the job class itself. The job class carries the annotation because its content usually bears on this behavior (for example, its execute method needs to explicitly "understand" the state it maintains).
If you use the @PersistJobDataAfterExecution annotation, you should strongly consider also using @DisallowConcurrentExecution: when two instances of the same job (JobDetail) execute concurrently, the data stored back into the JobDataMap can become indeterminate due to the race between them.
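The race being warned about is a classic lost update, and it can be demonstrated deterministically in plain Java. This is a model of the hazard, not Quartz code: two "executions" both read a counter from a shared map before either writes back, so one increment is lost. A barrier forces the unlucky interleaving so the outcome is reproducible.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CyclicBarrier;

// Deterministic illustration of the lost-update race between two concurrent
// executions that each read-modify-write shared "job data".
public class LostUpdateDemo {

    public static int raceTwoRuns() {
        Map<String, Integer> dataMap = new ConcurrentHashMap<>();
        dataMap.put("count", 0);
        CyclicBarrier bothHaveRead = new CyclicBarrier(2);

        Runnable execution = () -> {
            int seen = dataMap.get("count"); // both executions read 0
            try {
                bothHaveRead.await();        // wait until the other has also read
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            dataMap.put("count", seen + 1);  // both write 1; one increment is lost
        };

        Thread a = new Thread(execution);
        Thread b = new Thread(execution);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return dataMap.get("count");
    }

    public static void main(String[] args) {
        System.out.println(raceTwoRuns()); // prints 1, not 2
    }
}
```

@DisallowConcurrentExecution removes this hazard by serializing executions of the same JobDetail, which is why the two annotations are recommended together.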