
List:       linux-kernel
Subject:    merging discard request in the block layer
From:       Christoph Hellwig <hch@infradead.org>
Date:       2011-03-22 19:47:55
Message-ID: 20110322194755.GA20122@infradead.org

It seems the current block layer will happily try to merge discard
requests that were split because they hit the maximum that bi_size
can hold back together again.  At least that's what the

	blk: request botched

message makes me believe when testing XFS code that allows multiple
asynchronous discard requests, unlike the current blkdev_issue_discard,
which always waits for one discard to complete before starting the next.
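
For context, a rough sketch of what "asynchronous" means here: build a
discard bio, submit it, and handle completion in a callback instead of
sleeping on it.  This is only an illustration against the 2.6.38-era bio
API, not the actual XFS code; the names issue_discard_async and
discard_end_io are made up:

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Hypothetical sketch: fire off a discard bio and return without
 * waiting.  Completion is handled in bi_end_io, so many of these
 * can be in flight at once.
 */
static void discard_end_io(struct bio *bio, int error)
{
	/* the caller's bookkeeping for this range would go here */
	bio_put(bio);
}

static int issue_discard_async(struct block_device *bdev, sector_t sector,
			       sector_t nr_sects, gfp_t gfp_mask)
{
	struct bio *bio = bio_alloc(gfp_mask, 1);

	if (!bio)
		return -ENOMEM;

	bio->bi_sector = sector;
	bio->bi_bdev = bdev;
	bio->bi_size = nr_sects << 9;
	bio->bi_end_io = discard_end_io;

	submit_bio(REQ_WRITE | REQ_DISCARD, bio);	/* no wait here */
	return 0;
}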

I tried this little snippet to prevent it:

Index: xfs/block/blk-merge.c
===================================================================
--- xfs.orig/block/blk-merge.c	2011-03-22 13:07:24.733857580 +0100
+++ xfs/block/blk-merge.c	2011-03-22 13:08:17.448856577 +0100
@@ -373,7 +373,7 @@ static int attempt_merge(struct request_
 	/*
 	 * Don't merge file system requests and discard requests
 	 */
-	if ((req->cmd_flags & REQ_DISCARD) != (next->cmd_flags & REQ_DISCARD))
+	if ((req->cmd_flags & REQ_DISCARD) || (next->cmd_flags & REQ_DISCARD))
 		return 0;
 
 	/*

but it has no effect.  Using the big hammer and bypassing the whole
I/O scheduler logic, on the other hand, works fine:

Index: xfs/block/blk-core.c
===================================================================
--- xfs.orig/block/blk-core.c	2011-03-22 13:07:24.717855861 +0100
+++ xfs/block/blk-core.c	2011-03-22 14:56:13.424856289 +0100
@@ -1218,7 +1218,7 @@ static int __make_request(struct request
 
 	spin_lock_irq(q->queue_lock);
 
-	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
+	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) {
 		where = ELEVATOR_INSERT_FRONT;
 		goto get_rq;
 	}
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/