SQL: The data model behind MongoDB

You know that MongoDB is a modern NoSQL database, meaning, among other things, that it is schemaless and its data model is more complex than what SQL can handle. However, in this post I will attempt to show you how SQL is sufficient to serve as a data model behind it. Hard to swallow? Follow me as I go through a list of misconceptions I need to debunk and then as I make my case.

Misconception #1. Let’s consider the claim that NoSQL is schemaless. Taking MongoDB as an example, a database that can answer the following query is hardly as amorphous as that word implies.

db.somecollection.find( { foo: "foo" }, { bar: 1} )

This query expects all queried documents (not all documents in the collection, though) to have a “foo” field, which (sometimes) takes a string value, and a “bar” field (of whatever type). The net sum of all query/update code touching the database contributes to a definition of what data are supposed to be there. The data themselves, on the other hand, have a specific form, which may or may not correspond to this definition. In any case, this definition is a “schema”, by any other name. Consequently, it is more truthful (and some NoSQL material adheres to this truth) to talk about having flexible or dynamic schemas, instead of being schemaless. And keep in mind that the only flexibility is in extending the schema, as no facility exists for any other modification to it.

Misconception #2. Everyone knows that SQL represents “the relational model”. It might do, depending on what you mean by “SQL”. But you can’t possibly mean SQL99, whose data model explicitly encompasses “groups”, “repeating groups” and “nested repeating groups”, i.e. tree data. SQL99 transcends the flat relational model (in other ways, also, which I won’t go into in this post) so, confusing SQL as a standard with what vendors sell in the form of SQL is a classic “straw man” argument.

Nested relational models. Nested relational models can be queried using nested relational algebra. There is nothing extraordinary about nested relational algebra or its corresponding SQL. Its only difference from flat relational algebra is that attributes can themselves be relations and that, everywhere attributes can appear (in projection and selection), algebraic expressions can be used.

I’m going to use the example from the MongoDB documentation, which is “a collection named bios that contains documents with the following prototype” (note that it’s not a schema!).

{
  "_id" : 1,
  "name" : {
             "first" : "John",
             "last" : "Backus"
           },
  "birth" : ISODate("1924-12-03T05:00:00Z"),
  "death" : ISODate("2007-03-17T04:00:00Z"),
  "contribs" : [ "Fortran", "ALGOL", "Backus-Naur Form", "FP" ],
  "awards" : [
              {
                 "award" : "W.W. McDowell Award",
                "year" : 1967,
                "by" : "IEEE Computer Society"
              },
              {
                "award" : "National Medal of Science",
                "year" : 1975,
                "by" : "National Science Foundation"
              },
              {
                "award" : "Turing Award",
                "year" : 1977,
                "by" : "ACM"
              },
              {
                "award" : "Draper Prize",
                "year" : 1993,
                "by" : "National Academy of Engineering"
              }
  ]
}

A query to return each document with awards from 1990 onwards would be the following:

select 
  _id, 
  name, 
  birth, 
  death, 
  contribs, 
  (select * from awards where year >= 1990) as recentawards
from bios;

A query to return only documents having awards from 1990 onwards would be the following:

select * from bios
where exists (select * from awards where year >= 1990);
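To pin down what these two queries compute, here is a small Python sketch over plain dicts standing in for documents (only the prototype document is included, and the date fields are omitted for brevity):

```python
# The "bios" collection as plain Python data (just the prototype document;
# date fields omitted for brevity).
bios = [
    {
        "_id": 1,
        "name": {"first": "John", "last": "Backus"},
        "contribs": ["Fortran", "ALGOL", "Backus-Naur Form", "FP"],
        "awards": [
            {"award": "W.W. McDowell Award", "year": 1967, "by": "IEEE Computer Society"},
            {"award": "National Medal of Science", "year": 1975, "by": "National Science Foundation"},
            {"award": "Turing Award", "year": 1977, "by": "ACM"},
            {"award": "Draper Prize", "year": 1993, "by": "National Academy of Engineering"},
        ],
    }
]

# First query: every document, with the nested awards relation filtered
# into a new relation-valued attribute.
with_recent = [
    {**doc, "recentawards": [a for a in doc["awards"] if a["year"] >= 1990]}
    for doc in bios
]

# Second query: only documents whose nested selection is non-empty (EXISTS).
having_recent = [doc for doc in bios if any(a["year"] >= 1990 for a in doc["awards"])]
```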

It looks like normal SQL, doesn’t it? Unfortunately, it is not sufficient for everything you might want to do.

Shuffling information around. Consider what it would take to collect all distinct contribution names from all bios. Something more is needed to lift information from inside bios.contribs. This operation is called Unnest (μ) and its converse is Nest (ν). Unnesting contribs on a projection of bios that contains just _id and contribs, results in the following relation.

{
  "_id" : 1,
  "contribution" : "Fortran"
}
{
  "_id" : 1,
  "contribution" : "ALGOL"
}
{
  "_id" : 1,
  "contribution" : "Backus-Naur Form"
}
{
  "_id" : 1,
  "contribution" : "FP"
}
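Unnest is straightforward to state in code. A minimal Python sketch over dict-based tuples (the helper names are mine), applied to the _id/contribs projection:

```python
def unnest(relation, attr, new_name):
    """Unnest (mu): pair the flat attributes of each tuple with every
    element of the nested attribute attr, renamed to new_name."""
    out = []
    for t in relation:
        flat = {k: v for k, v in t.items() if k != attr}
        for v in t[attr]:
            out.append({**flat, new_name: v})
    return out

bios_proj = [{"_id": 1, "contribs": ["Fortran", "ALGOL", "Backus-Naur Form", "FP"]}]
flat = unnest(bios_proj, "contribs", "contribution")
# Collecting all distinct contribution names is now a flat projection:
distinct = sorted({t["contribution"] for t in flat})
```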

Note that the operation of Unnest can be achieved with a join like SQL Server’s CROSS APPLY, where the right-hand table expression is evaluated in the context of each row of the left table, as in the following unnesting of awards.

select
  _id, 
  name, 
  birth, 
  death, 
  contribs, 
  a.award,
  a.year,
  a.by
from bios cross apply bios.awards a;

Nesting _id is an operation akin to what GROUP BY does before aggregates are computed.

{
  "ids" : [ 1 ],
  "contribution" : "Fortran"
}
{
  "ids" : [ 1 ],
  "contribution" : "ALGOL"
}
{
  "ids" : [ 1 ],
  "contribution" : "Backus-Naur Form"
}
{
  "ids" : [ 1 ],
  "contribution" : "FP"
}

In fact, the operation of GROUP BY can be defined within this model: Nest using the grouping columns, producing a new relation attribute e.g. TheGroup; compute the aggregates on relation TheGroup, producing a single-row relation attribute e.g. TheAggregates; Unnest TheAggregates, producing a flat relation again.
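That recipe can be sketched in Python; for brevity the aggregate is computed directly on TheGroup instead of materializing a TheAggregates relation and unnesting it (dict-based tuples, helper names mine):

```python
def nest(relation, attr, new_name):
    """Nest (nu): group on all attributes except attr, collecting the
    attr values of each group into a relation-valued attribute."""
    groups = {}
    for t in relation:
        key = tuple(sorted((k, v) for k, v in t.items() if k != attr))
        groups.setdefault(key, []).append(t[attr])
    return [{**dict(k), new_name: vs} for k, vs in groups.items()]

flat = [
    {"_id": 1, "contribution": "Fortran"},
    {"_id": 1, "contribution": "ALGOL"},
    {"_id": 1, "contribution": "Backus-Naur Form"},
    {"_id": 1, "contribution": "FP"},
]
# GROUP BY _id, COUNT(*): nest on the grouping column, then aggregate
# each nested relation.
nested = nest(flat, "contribution", "TheGroup")
counts = [{"_id": t["_id"], "cnt": len(t["TheGroup"])} for t in nested]
```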

Verso. There is also another viewpoint, which I found in a data model called Verso, where restructuring is a powerful operation in its own right. Some restructurings can be shown to preserve all information, others to lose some; combined with the rest of nested relational algebra, restructuring can express queries that are very cumbersome in flat relational algebra.

The transformation I presented earlier on a projection of bios that contains just _id and contribs would have been a single application of restructuring using the target schema (called a “format” in Verso). The initial format would be written as _id (contribution)*. Restructuring to contribution (_id)* would be done in a single step.
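For concreteness, here is the effect of that restructuring step on the running example in Python (Verso specifies it declaratively through formats; the loop below merely reproduces the effect):

```python
# Restructure _id (contribution)*  ->  contribution (_id)*
bios_proj = [{"_id": 1, "contribs": ["Fortran", "ALGOL", "Backus-Naur Form", "FP"]}]

groups = {}
for t in bios_proj:
    for c in t["contribs"]:
        groups.setdefault(c, []).append(t["_id"])
restructured = [{"contribution": c, "ids": ids} for c, ids in groups.items()]
```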

The data model behind MongoDB. Hierarchical databases, insofar as they hold useful data that we need to query, need a way for us to express those queries. The conscious decision not to support a full-blown query language natively, delegating that task to application code, does not negate the need to define queries concisely. I posit that SQL, in its incarnation that targets hierarchical data, adapted to flexible and untyped schemas and incorporating the rich body of relevant research, is sufficiently powerful to serve as a data model for hierarchical databases like MongoDB.

Infusing MongoDB with some ACID

In the previous post, I demonstrated the obvious, viz. that in the absence of transactions, concurrent clients can step on each other’s feet. And now we find ourselves at a crossroads: either we admit that there is a new science of data management whereby all problems have been solved once we denormalized everything into large tree structures, or we accept that there is still a need for transactional support and we need some way to implement it. Lest I make it a habit to state the obvious, I won’t spend time on this dilemma. Let me just say that denormalizing, if anything, widens the need for transactions, since denormalized data must be constantly synchronized, notwithstanding the thousands of Google hits stating with almost religious fanaticism that everyone who seeks “Old SQL” properties in MongoDB just doesn’t get it.

Let’s revisit the item reordering exercise from the last post and reformulate it in a more elementary form: a particularly simple way to go about rearranging items in a list, namely swapping the locations of two items. With this formulation, the problem now is that we have multiple agents, each occasionally needing exclusive access to two shared resources to do some work, preferably without interfering too much with anyone else. Did I jog your memory? I bet I did. This is the Dining Philosophers problem of the venerable Edsger Dijkstra.

I will show you that MongoDB is, indeed, up to the task of handling this problem. I even hope that this will appease the MongoDB fans into flaming me less than usual for this kind of heretical talk. Let me set up the universe of discourse by showing you the code to create the data, and to set, get, test and reset them (yes, data is plural, by the way).

open MongoDB.Bson
open MongoDB.Driver

let mutex = new System.Object()
let global_counter = ref 0
let connectionString = "mongodb://localhost/?safe=true"
let server = MongoServer.Create(connectionString)
let mongo = server.GetDatabase("local")
let items = mongo.GetCollection<BsonDocument>("items")

let create_items () =
    let create_item (n:int) =
        let x = (new BsonDocument()).Add("name",BsonValue.Create(n))
        items.Insert(x) |> ignore
    for i in 1 .. 5 do create_item i
    let all = items.Count
    printfn "%A" all

let checkUniq listarr =
    (listarr |> Set.ofList |> Set.count) = listarr.Length

let check_items () =
    let cur = items.FindAll()
    let enumer = cur.GetEnumerator()
    let rec traverse l =
        if enumer.MoveNext () then traverse (enumer.Current :: l)
        else l
    let all = traverse []
    let allord = all |> List.map (fun d -> d.GetValue("ord").AsInt32)
    checkUniq allord

let get_item (n:int) =
    let query = new QueryDocument((new BsonDocument()).Add("name",BsonValue.Create(n)))
    let doc = items.FindOne(query)
    doc.GetValue("ord").AsInt32

let set_item (n:int) (o:int) =
    let query = new QueryDocument((new BsonDocument()).Add("name",BsonValue.Create(n)))
    let upd = new UpdateDocument((new BsonDocument()).Add("$set",BsonValue.Create((new BsonDocument()).Add("ord", BsonValue.Create(o)))))
    items.Update(query, upd) |> ignore

let reset_item (n:int) =
    let query = new QueryDocument((new BsonDocument()).Add("name",BsonValue.Create(n)))
    let upd = new UpdateDocument((new BsonDocument()).Add("$set",BsonValue.Create((new BsonDocument()).Add("ord", BsonValue.Create(n)).Add("locked", BsonValue.Create(0)))))
    items.Update(query, upd) |> ignore

let reset_items () = for i in 1 .. 5 do reset_item i

Simple enough. Five items, identified by integer names from one to five, holding integer places from one to five.

To make a long story short, I’ll skip the part where a naive solution to the problem leads to potential deadlock and adopt the practice of ordering resources to eliminate that. But let me show you how I implement locking, including the busy-wait loop to fill in for the missing blocking wait.

let r (lockfun:int->bool) (n:int) =
    let rec raux i =
        if i = 200 then failwith "Possible livelock" 
        let b = lockfun n
        if b then i
        else Async.RunSynchronously(Async.Sleep 10) ; raux (i+1)
    let i = raux 1
    lock mutex (fun () -> global_counter := !global_counter + i ; printfn "l %A -> %A" n i)

let simple_lock (n:int) =
    let query = new QueryDocument((new BsonDocument()).Add("name",BsonValue.Create(n)).Add("locked",BsonValue.Create(0)))
    let upd = new UpdateDocument((new BsonDocument()).Add("$set",BsonValue.Create((new BsonDocument()).Add("locked", BsonValue.Create(1)))))
    let wc = items.Update(query, upd)
    wc.DocumentsAffected = (int64)1

let simple_unlock (n:int) =
    let query = new QueryDocument((new BsonDocument()).Add("name",BsonValue.Create(n)))
    let upd = new UpdateDocument((new BsonDocument()).Add("$set",BsonValue.Create((new BsonDocument()).Add("locked", BsonValue.Create(0)))))
    let wc = items.Update(query, upd)
    wc.DocumentsAffected = (int64)1
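What makes simple_lock work is that MongoDB applies a single-document update atomically: the filter on name and locked = 0, together with the $set, acts as a compare-and-set, and DocumentsAffected tells us whether we won. A toy Python simulation of that try-lock semantics, with a threading.Lock standing in for the per-document atomicity (all names here are made up):

```python
import threading

# In-memory stand-in for the items collection; the single lock simulates
# MongoDB's atomicity guarantee for one update on one document.
_items = {n: {"ord": n, "locked": 0} for n in range(1, 6)}
_atomic = threading.Lock()

def simple_lock(n):
    """Try-lock: atomically flip locked from 0 to 1; report success,
    like checking DocumentsAffected = 1."""
    with _atomic:
        if _items[n]["locked"] == 0:
            _items[n]["locked"] = 1
            return True
        return False

def simple_unlock(n):
    with _atomic:
        _items[n]["locked"] = 0
```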

Each swap operation is implemented by the following code.

let rec swap_simple_lock (ix:int) (iy:int) =
    if ix > iy then
        swap_simple_lock iy ix
    else
        r simple_lock ix
        let oix = get_item ix
        r simple_lock iy
        let oiy = get_item iy
        set_item ix oiy
        set_item iy oix
        simple_unlock iy |> ignore
        simple_unlock ix |> ignore

The code to run a full test is the following.

let worker (swapfun:int->int->unit) cnt =
    let random = new System.Random()
    async {
        for i = 1 to cnt 
            do
                let ix = random.Next(4) + 1
                let iytemp = random.Next(4) + 1
                let iy = if ix <> iytemp then iytemp else (iytemp % 4) + 1
                swapfun ix iy
    }

let run_test swapfun cnt wcnt =
    let s = seq { for i in [1 .. wcnt] -> worker swapfun cnt }
    Async.RunSynchronously (Async.Parallel s)

let cnt = 10
let wcnt = 10
let expected_cnt = 2 * cnt * wcnt
run_test swap_simple_lock cnt wcnt |> ignore
let b = check_items ()
printfn "check = %A" b
printfn "Expected_counter = %A" expected_cnt
printfn "Global_counter = %A" !global_counter

Running the test almost proves that the problem is solved and, for practical purposes, one might even say that it is. However, all we have done is push the problem into a corner, albeit a corner similar to the one into which the “Full SQL” solution pushes it. Starvation due to livelock is still possible, particularly since the ordering of resources makes some workers less privileged than others. To demonstrate that, I’ll “rig” the test a little.

let rigged_worker ix iy (swapfun:int->int->unit) cnt =
    let random = new System.Random()
    async {
        for i = 1 to cnt 
            do
                let flip = random.Next(5)
                // With probability 1/5, contend for item 1, the first
                // resource in the locking order, starving other workers.
                let ix, iy = if flip = 0 then (1, ix) else (ix, iy)
                swapfun ix iy
        lock mutex (fun () -> printfn "swapped %A, %A for %A times" ix iy cnt)
    }

let run_rigged_test swapfun cnt wcnt =
    let s = seq { for i in [1 .. wcnt] -> rigged_worker 4 5 swapfun cnt }
    Async.RunSynchronously (Async.Parallel s)

run_rigged_test swap_simple_lock cnt wcnt |> ignore

This set of probabilities makes a livelock appear in almost every run, on my machine. So a livelock is still possible, but is it more likely than in similar concurrent transactions in SQL? I must say I do not have an authoritative answer but, after monitoring systems running many millions of similar concurrent transactions per day, I can attest to having witnessed one transaction blocked by another only when the latter took more time than expected. Modern RDBMSs include mechanisms to minimize starvation (here’s an article I found to that effect).

So, is this the end of it? I hope not, and I have some ideas on how to implement a solution that goes a bit further in the way of avoiding starvation. Rest assured that, should they come to fruition, you’ll be the first to know.

Missing the A in ACID

The A in ACID stands for “Atomic”, and I’m going to present some experiments about what can go wrong, and how often, when the A is not there. And it’s not there when you’re working with so-called NoSQL databases, in this case MongoDB. A post on MongoDB might seem peculiar, given my evident penchant for high-level abstractions, when the post I should be writing would probably have been on how recursive SQL traverses a non-deterministic automaton, but things have been straying from normal lately. I’ve been downsized from my daytime job and have focused more on Flying Donut. Until things settle into a new normal (if unemployment can be called that), I dug up old Flying Donut material to fill the void. Not to mention that it’s one more opportunity to show how well F# fares in such simulation tasks.

The context is the apparently simple action of moving an item from one place of a list to another. Conventional MongoDB wisdom says that, in the absence of atomic transactions, you design so that you have to update whole documents or sub-documents at once. Since I was near the Flying Donut team from the beginning, I learned about an early debate on how to rearrange items of a list, and I decided to run some experiments and find out how often can things go bad when parallel modifications are done without atomicity.

Both experiments are on a linked list representation. The encoding is terse: nodes are numbered 0 to N and the “next node” function is represented by an array. There are locals numbered 0 to 1, also stored in an array. You can read the actual code at the end of the post.

The first experiment concerns fine-grained actions that would be used in a typical server-side implementation of moving a node. It uses the following five atomic actions: T0 = X.next, X.next = T0.next, T1 = Y.next, T0.next = T1, Y.next = T0 (T0 and T1 are the temps). I simulated all possible interleavings of those five actions between two clients, the first moving a node from 1 to 3 and the second moving a node from 2 to 4. The result is havoc: of the 252 interleavings, 60 ended up with a well-formed list, and 192 ended up with loops or unlinked items (finegrainedRun).

The second experiment concerns actions that would be used in a client-side implementation which took advantage of knowledge of the whole state, to avoid extra reads. It makes use of the following actions: X.next = NX.next, NX.next = Y.next, Y.next = NX, implementing the macro move MOVE(x,nx,nnx,y,ny), meaning: Move nx, which is currently between x and nnx, to between y and ny. I thought that providing more information would make things better, but I was wrong. The client-side version is even less resilient than the server-side one, probably because the server-side case could adapt to the current state of the list and recover in some cases. 20 interleavings, all malformed (coarsegrainedRun)…
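Those interleaving counts are easy to sanity-check: two sequences of m and n actions have C(m+n, m) interleavings. A Python transliteration of the interleave function (the F# original appears in the code listing below):

```python
def interleave(xs, ys):
    """All interleavings of two action sequences, preserving each
    sequence's internal order."""
    if not xs:
        return [ys]
    if not ys:
        return [xs]
    return [[xs[0]] + rest for rest in interleave(xs[1:], ys)] + \
           [[ys[0]] + rest for rest in interleave(xs, ys[1:])]

fine = interleave(list("abcde"), list("ABCDE"))    # C(10, 5) = 252
coarse = interleave(list("abc"), list("ABC"))      # C(6, 3) = 20
```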

Conventional MongoDB wisdom seems to win, this time. There’s no substitute for the A in ACID. You have to save the whole list at once, using the only kind of A in MongoDB: a single update operation.

Just kidding. You know I’m no fan of conventional wisdom. I can’t possibly end the post with that remark. Although I do recommend that you follow conventional wisdom in similar MongoDB problems, I’m going to play the devil’s advocate and show you how it is possible for a number of concurrent processes to move items around in a list, without interference, using enough of the I to make up for the lack of the A. But that has to wait until the next post!

The code

let rec interleave lst1 lst2 =
  match lst1, lst2 with
    | [], ys -> [ys]
    | xs, [] -> [xs]
    | x :: xs, y :: ys ->
        (List.map (fun zs -> x :: zs) (interleave xs (y::ys))) @
        (List.map (fun zs -> y :: zs) (interleave (x::xs) ys))

//let show () =
//    let res = interleave [1;2;3] [11;12;13]
//    res |> List.iter (fun x -> printf "["; x |> List.iter (printf "%O "); printf "]\n")
//    printfn "Done"

//printfn "Interleavings"
//do show ()

type locals = int array

type thelist = int array

type state = State of locals array * thelist

let checkUniq listarr =
    (listarr |> Set.ofArray |> Set.count) = listarr.Length

let showState2 listarr =
    printf "["
    listarr |> Array.iter (printf "%O ")
    printf "] : %O\n" (checkUniq listarr) 

let showState (State (locarr, listarr)) =
    printf "Process0 T0=%O T1=%O, Process1 T0=%O T1=%O, " locarr.[0].[0] locarr.[0].[1] locarr.[1].[0] locarr.[1].[1]
    showState2 listarr 

type movefun = int -> int -> int -> state -> state

// T0 = X.next
let getNextX (x:int) (y:int) (i:int) (State (locarr, listarr)) : state =
    let mylocals = locarr.[i]
    mylocals.[0] <- listarr.[x]
    State (locarr, listarr)
    
// X.next = T0.next
let linkX (x:int) (y:int) (i:int) (State (locarr, listarr)) : state =
    let mylocals = locarr.[i]
    listarr.[x] <- listarr.[mylocals.[0]]
    State (locarr, listarr)

// T1 = Y.next
let getNextY (x:int) (y:int) (i:int) (State (locarr, listarr)) : state =
    let mylocals = locarr.[i]
    mylocals.[1] <- listarr.[y]
    State (locarr, listarr)

// T0.next = T1
let linkNextX (x:int) (y:int) (i:int) (State (locarr, listarr)) : state =
    let mylocals = locarr.[i]
    listarr.[mylocals.[0]] <- mylocals.[1]
    State (locarr, listarr)

// Y.next = T0
let linkY (x:int) (y:int) (i:int) (State (locarr, listarr)) : state =
    let mylocals = locarr.[i]
    listarr.[y] <- mylocals.[0]
    State (locarr, listarr)

let finegrainedUnit () =
    let lst = [|1;2;3;4;5;6|]
    let loc1 = [|0;0|]
    let loc2 = [|0;0|]
    let state = State ([|loc1;loc2|],lst)
    let actions = [ getNextX 1 3 0; getNextX 2 4 1; linkX 1 3 0; linkX 2 4 1; getNextY 1 3 0; getNextY 2 4 1; linkNextX 1 3 0; linkNextX 2 4 1; linkY 1 3 0; linkY 2 4 1 ]
    let finalstate = actions |> List.fold (fun s fn -> fn s) state
    showState finalstate
    ()

printfn "Linked list example"
do finegrainedUnit ()

let applyactions actions = 
    let lst = [|1;2;3;4;5;6|]
    let loc1 = [|0;0|]
    let loc2 = [|0;0|]
    let state = State ([|loc1;loc2|],lst)
    actions |> List.fold (fun s fn -> fn s) state

let finegrainedRun () =
    let actionsseq = [ getNextX ; linkX ; getNextY ; linkNextX ; linkY ]
    let actionsf x y i = actionsseq |> List.map (fun f -> f x y i)
    let actions1 = actionsf 1 3 0
    let actions2 = actionsf 2 4 1
    
    let interleavings = interleave actions1 actions2
    let res = interleavings |> List.map applyactions
    res |> List.iter showState
    ()

printfn "Linked list"
do finegrainedRun ()

type movefun2 = int -> int -> int -> int -> int -> thelist -> thelist

// X.next = NX.next
let linkX2 (x:int) (nx:int) (nnx:int) (y:int) (ny:int) (listarr:thelist) : thelist =
    listarr.[x] <- nnx
    listarr

// NX.next = Y.next
let linkNextX2 (x:int) (nx:int) (nnx:int) (y:int) (ny:int) (listarr:thelist) : thelist =
    listarr.[nx] <- ny
    listarr

// Y.next = NX
let linkY2 (x:int) (nx:int) (nnx:int) (y:int) (ny:int) (listarr:thelist) : thelist =
    listarr.[y] <- nx
    listarr

let coarsegrainedUnit () =
    let lst = [|1;2;3;4;5;6|]
    let actions = [ linkX2 1 2 3 3 4; linkX2 2 3 4 4 5; linkNextX2 1 2 3 3 4; linkNextX2 2 3 4 4 5; linkY2 1 2 3 3 4; linkY2 2 3 4 4 5 ]
    let finalstate = actions |> List.fold (fun s fn -> fn s) lst
    showState2 finalstate
    ()

printfn "Linked list 2nd set of actions example"
do coarsegrainedUnit ()

let applyactions2 actions = 
    let lst = [|1;2;3;4;5;6|]
    actions |> List.fold (fun s fn -> fn s) lst

let coarsegrainedRun () =
    let actionsseq = [ linkX2 ; linkNextX2 ; linkY2 ]
    let actionsf x nx nnx y ny = actionsseq |> List.map (fun f -> f x nx nnx y ny)
    let actions1 = actionsf 1 2 3 3 4
    let actions2 = actionsf 2 3 4 4 5
    
    let interleavings = interleave actions1 actions2
    let res = interleavings |> List.map applyactions2
    res |> List.iter showState2
    ()

printfn "Linked list 2nd set of actions"
do coarsegrainedRun ()

printfn "Done"