From: Oleg Nesterov <oleg@redhat.com>
Date: Sun, 30 May 2010 16:19:22 -0400
Subject: [misc] workqueue: initial prep for cancel_work_sync
Message-id: <20100530161922.GB9577@redhat.com>
Patchwork-id: 25904
O-Subject: [RHEL5 PATCH 1/7] bz#596626: workqueues: initial simple preparations for cancel_work_sync
Bugzilla: 596626
RH-Acked-by: Stanislaw Gruszka <sgruszka@redhat.com>
RH-Acked-by: Prarit Bhargava <prarit@redhat.com>

The first patch in a series to implement cancel_work_sync() and
cancel_delayed_work_sync() in rhel5, needed to backport the drivers
which require this functionality.

The intent was to make as few changes as possible to the current
rhel5 code. Certainly this could be done in a cleaner way. For
example, the usage of bit 1 in set_wq_data/get_wq_data is really
hackish. Instead, we could change the code to not use ->wq_data at
all and put cwq into ->pending (like upstream does), but this
requires more changes. Or, we could just copy-and-paste the code
from upstream; but then, again, we would need many more changes to
adapt it to rhel5's "struct work_struct" and to the old cpu-hotplug
code.

This patch (changes to the existing code):

1. Introduce cwq->current_work. run_workqueue() sets/clears this
   pointer under cwq->lock before/after it calls work->func().
   Nobody uses this pointer so far.

2. Add smp_wmb() into __queue_work() after setting work->wq_data,
   needed for try_to_grab_pending() (see the sketch after the
   diff).

3. Change __create_workqueue() to do some basic initializations on
   each possible (not just online) cpu in advance. This is needed
   because wait_on_work() (not yet implemented) must not use
   workqueue_mutex and thus can't trust cpu_online_map; to avoid
   races with cpu-hotplug it should do for_each_possible_cpu() and
   check cwq->current_work under cwq->lock (see the sketch after
   the diff).

4. Not strictly needed, but since cwq_basic_init() does
   spin_lock_init(&cwq->lock) we can kill this line in
   create_workqueue_thread().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 8594efb..25758a3 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -45,6 +45,7 @@ struct cpu_workqueue_struct {
 
 	long remove_sequence;	/* Least-recently added (next to run) */
 	long insert_sequence;	/* Next to add */
 
+	struct work_struct *current_work;
 	struct list_head worklist;
 	wait_queue_head_t more_work;
@@ -87,6 +88,11 @@ static void __queue_work(struct cpu_workqueue_struct *cwq,
 
 	spin_lock_irqsave(&cwq->lock, flags);
 	work->wq_data = cwq;
+	/*
+	 * Ensure that we get the right work->wq_data if we see the
+	 * result of list_add() below, see try_to_grab_pending().
+	 */
+	smp_wmb();
 	list_add_tail(&work->entry, &cwq->worklist);
 	cwq->insert_sequence++;
 	wake_up(&cwq->more_work);
@@ -214,6 +220,7 @@ static void run_workqueue(struct cpu_workqueue_struct *cwq)
 		void (*f) (void *) = work->func;
 		void *data = work->data;
 
+		cwq->current_work = work;
 		list_del_init(cwq->worklist.next);
 		spin_unlock_irqrestore(&cwq->lock, flags);
 
@@ -223,6 +230,7 @@ static void run_workqueue(struct cpu_workqueue_struct *cwq)
 
 		spin_lock_irqsave(&cwq->lock, flags);
 		cwq->remove_sequence++;
+		cwq->current_work = NULL;
 		wake_up(&cwq->work_done);
 	}
 	cwq->run_depth--;
@@ -328,13 +336,20 @@ void fastcall flush_workqueue(struct workqueue_struct *wq)
 }
 EXPORT_SYMBOL_GPL(flush_workqueue);
 
+static inline void cwq_basic_init(struct workqueue_struct *wq, int cpu)
+{
+	struct cpu_workqueue_struct *cwq = per_cpu_ptr(wq->cpu_wq, cpu);
+
+	spin_lock_init(&cwq->lock);
+	cwq->current_work = NULL;
+}
+
 static struct task_struct *create_workqueue_thread(struct workqueue_struct *wq,
 						   int cpu)
 {
 	struct cpu_workqueue_struct *cwq = per_cpu_ptr(wq->cpu_wq, cpu);
 	struct task_struct *p;
 
-	spin_lock_init(&cwq->lock);
 	cwq->wq = wq;
 	cwq->thread = NULL;
 	cwq->insert_sequence = 0;
@@ -370,6 +385,9 @@ struct workqueue_struct *__create_workqueue(const char *name,
 		return NULL;
 	}
 
+	for_each_possible_cpu(cpu)
+		cwq_basic_init(wq, cpu);
+
 	wq->name = name;
 	mutex_lock(&workqueue_mutex);
 	if (singlethread) {
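
For illustration, a rough sketch of how try_to_grab_pending() (added
later in this series) is expected to use the smp_wmb() above. This is
a hypothetical adaptation of the upstream code to rhel5's "struct
work_struct"; the names and details are not final:

/* Hypothetical sketch, not part of this patch. */
static int try_to_grab_pending(struct work_struct *work)
{
	struct cpu_workqueue_struct *cwq;
	int ret = -1;

	/* The work was idle; we own the pending bit now, done. */
	if (!test_and_set_bit(0, &work->pending))
		return 0;

	cwq = work->wq_data;
	if (!cwq)
		return ret;

	spin_lock_irq(&cwq->lock);
	if (!list_empty(&work->entry)) {
		/*
		 * This work is queued, but perhaps we locked the wrong
		 * cwq. smp_rmb() pairs with the smp_wmb() in
		 * __queue_work(): if we see the result of
		 * list_add_tail(), we must also see the ->wq_data the
		 * work was queued with.
		 */
		smp_rmb();
		if (cwq == work->wq_data) {
			/* Stolen before it could run, ret = 1. */
			list_del_init(&work->entry);
			ret = 1;
		}
	}
	spin_unlock_irq(&cwq->lock);

	return ret;
}

The only point here is the barrier pairing: the reader takes
cwq->lock, re-checks ->wq_data after smp_rmb(), and only then can it
safely steal the work from ->worklist.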
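
Similarly, a hypothetical sketch of wait_on_work(), showing why
cwq_basic_init() must run on every possible cpu: the loop below takes
cwq->lock on cpus which may never have been online, so ->lock and
->current_work must be valid in advance. The waiting reuses the
existing sequence counters in the style of flush_cpu_workqueue(); the
real implementation in the later patches may differ:

/* Hypothetical sketch, not part of this patch. */
static void wait_on_work(struct workqueue_struct *wq,
			 struct work_struct *work)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct cpu_workqueue_struct *cwq = per_cpu_ptr(wq->cpu_wq, cpu);
		long sequence_needed;

		spin_lock_irq(&cwq->lock);
		/*
		 * ->current_work is set/cleared only under ->lock, and
		 * ->lock/->current_work are valid even if this cpu
		 * never came online, thanks to cwq_basic_init().
		 */
		if (cwq->current_work != work) {
			spin_unlock_irq(&cwq->lock);
			continue;
		}
		/*
		 * run_workqueue() bumps ->remove_sequence and wakes up
		 * ->work_done after work->func() returns.
		 */
		sequence_needed = cwq->remove_sequence + 1;
		spin_unlock_irq(&cwq->lock);

		wait_event(cwq->work_done,
			   cwq->remove_sequence - sequence_needed >= 0);
	}
}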