mark nottingham

Click Submit Only Once

Saturday, 13 September 2003


I shudder when I see these words. Everyone I’ve asked has, at least once, gotten two orders of something online (personally, I’ve had the SonyEricsson store ship three duplicate orders); “Click Submit Only Once” is intended to stop that. The problem is, it puts me and every other shopper between a rock and a hard place.

That’s because if there’s any problem with my browser, my computer’s network connection, the Internet itself or the server, I don’t know the status of my order. You know, that blank screen and eternal “waiting” cursor. I don’t have any way of seeing whether the order made it, except by backing out and waiting for some kind of confirmation e-mail (and that sometimes takes days). On the other hand, if I go ahead and resubmit, there’s a real possibility that I’ll get two (or three) of everything.

For the technical-minded, this is all because HTTP isn’t a “reliable” protocol; in other words, there are situations where the server and the client have different ideas of what’s happened, or don’t know what’s happened, with a particular request. For the most common HTTP method, GET, this doesn’t matter; it’s “safe,” which means that GETting doesn’t affect state on the server, and “idempotent,” which means that you can repeat a GET if you’re not sure what’s happened, with no ill effect. However, POST - the method used to submit orders, among other things - does affect state, and often it very much matters how many times you do it.
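To make the distinction concrete, here is a toy sketch (all names invented; plain functions stand in for HTTP handlers): repeating the GET changes nothing, while repeating the POST creates a duplicate order.

```python
# Toy illustration of "safe/idempotent GET" vs. "state-changing POST".
# Plain functions and an in-memory store stand in for real HTTP handlers.
inventory = {"widget": 5}
orders = []

def http_get_inventory(item):
    # Safe and idempotent: asking twice leaves the server unchanged.
    return inventory[item]

def http_post_order(item):
    # Not idempotent: every retry creates another order.
    orders.append(item)
    return len(orders)
```

If a timeout leaves you unsure whether the POST arrived, retrying it blindly is exactly how you end up with two of everything.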

That said, this is a common problem that’s pretty easy to fix; it doesn’t require any fancy server footwork, browser plug-ins or HTTP extensions. You can avoid it with a very simple design pattern in your Web application. Roughly, it looks like this:

  1. The browser requests the Web page that contains the final order form (with the ‘Submit’ button in it).
  2. The server sends back a page whose form has a unique ‘action’ URI; one that identifies this particular order and no other.
  3. The user submits the form, which is POSTed to the ‘action’ URI, in turn submitting the order.
  4. The server sends back a page that says that the order has been successfully submitted.

Here’s the good part: if the server receives more than one POST to any particular ‘action’ link, it should generate a message saying “This order has already been submitted,” perhaps along with a summary of the order and its status. This way, if the user does, for whatever reason, click “submit” twice, it won’t cause a duplicate order.
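The steps above can be sketched roughly like this (a minimal sketch with an in-memory store; the function names and the dict are assumptions, not part of the original post — a real application would use a database):

```python
# Sketch of the "unique action URI" pattern. 'orders' maps each action id
# to None (reserved, nothing submitted) or to the submitted order.
import uuid

orders = {}

def new_order_form():
    # Steps 1-2: generate a form whose 'action' URI is unique to this order.
    action_id = uuid.uuid4().hex
    orders[action_id] = None          # reserved, nothing submitted yet
    return f"/orders/{action_id}"     # embed this as the form's 'action'

def submit_order(action_id, cart):
    # Steps 3-4: POST handler; only the first submission takes effect.
    if action_id not in orders:
        return "404 Not Found"
    if orders[action_id] is not None:
        return "This order has already been submitted."
    orders[action_id] = cart
    return "Your order has been successfully submitted."
```

A second click of ‘submit’ hits the same action id, so the server answers with the already-submitted message instead of creating a duplicate.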

To make it even better, the server can put a notice on the ‘submit’ page: “If there is a problem with your order, click in the ‘location’ bar and press return.” In most browsers, this will cause the request to be resubmitted to the server, but as a GET, not a POST. The server can then show the status of the order if it was successful, and an error page (perhaps with a link back to the submit page) if it wasn’t. As a bonus, this page can act as the order status page.

There are a few things that the server needs to do to make sure this technique works. First of all, the page containing the ‘submit’ link can’t be cacheable; otherwise, it’ll be difficult to make a second order (the server will think you’re resubmitting the first). This can be accomplished by using POST to get the ‘submit’ page itself, or by using cacheability controls on the response. Secondly, the link to the order page really has to be unique for each logical order; the server might ensure this by using UUIDs, sequential numbers, or per-customer order numbering (e.g., “/orders/user@email.address/43”). Finally, it needs to properly respond to GET and POST on the order URI.
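In outline, those requirements might look something like this (again a sketch: the `orders` store, handler names, and messages are all invented; a real application would sit behind a web framework):

```python
# Sketch of the server-side requirements: an uncacheable 'submit' page,
# and a GET on the order URI that doubles as the order-status page.
orders = {"abc123": None}  # order id -> None until the POST arrives

def submit_page_headers():
    # Requirement 1: the page carrying the unique action URI must not be
    # cached, or a later visit would silently reuse an old order id.
    return {"Cache-Control": "no-store"}

def get_order_status(order_id):
    # The 'click in the location bar and press return' trick re-requests
    # the action URI as a GET, so this handler is also the status page.
    if order_id not in orders:
        return "No such order."
    if orders[order_id] is None:
        return "This order has not been submitted; please try again."
    return f"Order received: {orders[order_id]}"
```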

Note that this isn’t “real” reliability, by some definitions; it doesn’t take care of message ordering, for example. However, it is good enough for making sure your customers don’t get two (or three!) of everything. I’d really like to see this pattern or something like it baked into the software toolkits that people use, so we can get rid of “Click Submit Only Once.”

I’ll leave “Don’t Use Your Browser’s Back Button” for another day; that one really bugs me…

P.S. After writing this on a plane, I did a Google on “Reliable POST”, which got me Paul Prescod’s thoughts on the subject. I think we’re singing the same tune…

UPDATE: See POE, a more formal proposal in this space.


aaron said:

amen, brother. junior varsity programmers…

anyway. dude, my templates could kick your templates! check out my weblog.

Monday, September 22 2003 at 10:05 AM

Greg Jorgensen said:

I’ve used a similar but (I think) easier technique. It requires maintaining session state on the server side, but so does the technique described above (you have to keep track of which unique action pages were posted or not). The technique I use is less susceptible to client-side hacks, such as monkeying with the unique “action ID.”

  1. Server-side app maintains session state for each user. This is generally done with either a cookie on the client side or a token passed around in the URLs. The cookie/token is just a unique (and long) ID used to access the actual session information on the server. PHP and ASP support this natively.

  2. When the user initiates an action that will end up with a “submit only once” page, like a shopping cart checkout process, set a variable in the session to some value indicating the action wasn’t submitted yet. For example I use ordernumber = “”.

  3. When the action is confirmed (the “submit only once” page is submitted the first time), check that the variable set in step 2 is still indicating nothing submitted yet. If that’s true, change it to some other value indicating that the page was submitted. I do something like ordernumber = “pending” and then save the session (you may have to explicitly save the session, or redirect to another page to save it), and then change it to ordernumber=”12345” when I get the actual order number from the back-end system or database.

You can add more checks to this if you’re worried about users rapidly double-clicking the submit button, but in practice I’ve never seen any problems.
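A minimal sketch of this session-variable technique, assuming a plain dict stands in for the server-side session and `place_order` is an invented stand-in for the back-end call:

```python
# Sketch of the three-state session marker: "" (not submitted) ->
# "pending" -> the real order number. A dict stands in for the session.
session = {}

def start_checkout():
    # Step 2: mark that nothing has been submitted yet.
    session["ordernumber"] = ""

def confirm_checkout():
    # Step 3: only proceed if the marker still says "not submitted".
    if session.get("ordernumber") != "":
        return "Order already submitted (number: %s)" % session["ordernumber"]
    session["ordernumber"] = "pending"      # save the session here for real
    session["ordernumber"] = place_order()  # e.g. "12345" from the back end
    return "Order placed: " + session["ordernumber"]

def place_order():
    # Stand-in for the back-end system that returns the real order number.
    return "12345"
```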

Greg Jorgensen

Wednesday, March 23 2005 at 2:08 AM

Greg Jorgensen said:

Your article and Paul Prescod’s linked article state that HTTP GET requests are idempotent, and that POST requests are not. That’s oversimplified.

POST requests don’t necessarily change anything on the server side, and may be idempotent.

GET requests may trigger a change on the server side, and therefore may not be idempotent.

Think of how service-oriented architecture works: a message and parameters are passed to an application over HTTP. Whether the message and parameters are encoded as GET or POST requests is immaterial. For example, I could write a server-side application that updates a database with GET requests, with a URL along the lines of “/update?record=123&status=shipped”.

That’s a GET request that is not idempotent.

The type of HTTP request does not determine if the request is idempotent or not; what the server does in response to the request does.
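That point can be illustrated with a toy sketch (invented names; plain functions standing in for handlers): the ‘GET’ handler below has a side effect and is not idempotent, while the ‘POST’ handler is.

```python
# Idempotency is a property of what the handler does, not of the method
# name it happens to be bound to.
hits = {"count": 0}

def get_and_increment():
    # A GET-style handler with a side effect: repeating it changes state,
    # so it is NOT idempotent (whatever the RFC expects of GET).
    hits["count"] += 1
    return hits["count"]

def post_set_count(value):
    # A POST-style handler that IS idempotent: doing it twice leaves the
    # server in the same state as doing it once.
    hits["count"] = value
    return hits["count"]
```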

Greg Jorgensen

Wednesday, March 23 2005 at 2:13 AM

Greg Jorgensen said:

Mark wrote:

“That isn’t true; if a GET isn’t idempotent, it violates RFC2616, and all bets are off.”

My point was that GET is not inherently idempotent; GET requests can have side-effects on the server. The two articles I responded to did not make that point, but rather implied that GET requests were always idempotent.

RFC 2616 (section 9.1.1) doesn’t require that GET be idempotent, only that the user is not accountable for side-effects. The RFC refers to user agents, not to applications using HTTP to communicate among themselves. From RFC 2616:

“Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.”

In real life users don’t form expectations about side effects based on whether their action initiates a GET or POST request (if they know the difference). They form their expectation from the application interface. Whether the application uses GET or POST to pass messages and data is irrelevant to the user.

Mark also wrote: “HTTP is an application protocol, not a transport protocol.”

I assume that refers to the OSI 7 Layers Model. I think you’re confusing transport with data transfer. The OSI definition of the Application Layer includes communication and data transfer, which is how web services use HTTP. Transport protocols have to do with reliable packet transmission, not with communicating among processes or transferring application-level data. TCP operates at the transport layer, and that is clearly not the right place for web services to talk to each other.

Regardless of whether using GET requests to implement web services violates the RFC or not (I don’t think it does), it’s widely done.

Greg Jorgensen

Wednesday, March 23 2005 at 10:11 AM

Ian Bicking said:

Someone should collect these things as Patterns in Web Programming. Someone wrote a paper on that some time ago, but it didn’t actually contain any good patterns, just descriptions of the components of a web application (which is quite a different thing).

Monday, March 28 2005 at 1:06 AM

Ap said:

I guess one simple way is to use JavaScript to check whether the page has been submitted or not. If it has been submitted, then simply return.

Saturday, September 17 2005 at 4:20 AM

Matt Wilson said:

Great article; I’m happy to see people spreading the word about how GETs shouldn’t change the server state.

Saturday, December 17 2005 at 7:02 AM