I prefer simplicity and would go with the first example, but I’d be happy to hear other options. Here are a few examples:

HTTP/1.1 403 POST /endpoint
{ "message": "Unauthorized access" }

HTTP/1.1 403 POST /endpoint
Unauthorized access (no JSON)

HTTP/1.1 403 POST /endpoint
{ "error": "Unauthorized access" }

HTTP/1.1 403 POST /endpoint
{
  "code": "UNAUTHORIZED",
  "message": "Unauthorized access"
}

HTTP/1.1 200 (🤡) POST /endpoint
{
  "error": true,
  "message": "Unauthorized access"
}

HTTP/1.1 403 POST /endpoint
{
  "status": 403,
  "code": "UNAUTHORIZED",
  "message": "Unauthorized access"
}

Or your own example.
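
For reference, here is a minimal sketch (TypeScript; the endpoint and field names are illustrative) of how a client would consume the first format, where the status code carries the machine-readable signal and the body only adds a message:

interface ApiError {
  // Body shape of the first example: only a human-readable message.
  message: string;
}

async function callEndpoint(payload: unknown): Promise<unknown> {
  const res = await fetch("/endpoint", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) {
    // The status code is the machine-readable part; the message is for humans/logs.
    const body = (await res.json().catch(() => null)) as ApiError | null;
    throw new Error(body?.message ?? `Request failed with status ${res.status}`);
  }
  return res.json();
}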

  • gencha@lemm.ee · 3 months ago

    I don’t necessarily disagree, but I have spent considerable time on this subject and can see merit in decoupling your own error signaling from the HTTP layer.

    No matter how you design your API, if responses pass through additional layers, like load balancers and CDNs, you no longer have full control over every response your clients receive. At that point it can make sense to always signal a successful backend connection with a 200, even if the operation itself failed.
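
    As a rough sketch of that style (TypeScript; the envelope fields are assumptions, not a standard), the client treats any non-200 as an infrastructure problem and reads the application outcome from the body:

    interface Envelope<T> {
      error: boolean;      // application-level failure flag
      message?: string;    // human-readable detail
      data?: T;            // payload on success
    }

    async function callBackend<T>(url: string, payload: unknown): Promise<T> {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
      // Non-200 here means a load balancer, CDN, or gateway problem,
      // not an application error.
      if (res.status !== 200) {
        throw new Error(`Infrastructure failure: HTTP ${res.status}`);
      }
      const envelope = (await res.json()) as Envelope<T>;
      // Application failures are signaled inside the body instead.
      if (envelope.error) {
        throw new Error(envelope.message ?? "Backend reported an error");
      }
      return envelope.data as T;
    }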

    Going further, your API may include partial-success scenarios (think batch processing), where the result is a mix of successes and failures that doesn’t map to a single HTTP status.
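
    For example (again a sketch; the shapes are illustrative), a batch endpoint might return 200 with per-item outcomes, since no single status code describes the whole result:

    type ItemResult =
      | { id: string; ok: true }
      | { id: string; ok: false; code: string; message: string };

    interface BatchResponse {
      results: ItemResult[];
    }

    // The request as a whole "succeeded" (HTTP 200), but one item failed.
    const example: BatchResponse = {
      results: [
        { id: "1", ok: true },
        { id: "2", ok: false, code: "UNAUTHORIZED", message: "Unauthorized access" },
      ],
    };
    const failed = example.results.filter((r) => !r.ok).length;
    console.log(`${failed} of ${example.results.length} items failed`);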

    You could even argue that there is really no reason to couple your API so tightly to a concept of the transport layer it happens to use.
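
    In that view the error model is defined without any HTTP vocabulary, and a thin adapter maps it to a status code only at the edge (a sketch; names are illustrative):

    type Result<T> =
      | { kind: "ok"; value: T }
      | { kind: "error"; code: string; message: string };

    // Transport adapter: the only place HTTP semantics appear.
    function toHttpStatus(code: string): number {
      switch (code) {
        case "UNAUTHORIZED":
          return 403;
        case "NOT_FOUND":
          return 404;
        default:
          return 500;
      }
    }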