I agree that order matters. It matters in pretty much every application that uses data. Even HN has data ordered by newest, rank (best), etc. And Arc has a tonne of functionality for sorting and comparing data to let you do that very thing.
But I don't see how the need to order data in your application equates to needing tables that maintain insertion order.
I'm not going to say what you're doing is wrong, because it isn't, but I am going to say it's non-standard, and I think there are other ways to manage your data that don't require ordered tables (which have downsides as the data grows). It's just my opinion, but there's not much you can't do with the standard approach of storing table records and maintaining indexes.
That said, if your data load stays low with little to no growth, an ordered table could very well be a good fit.
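To be concrete about what I mean by storing records and maintaining indexes, here's a minimal Python-flavoured sketch (the record shape and field names are just made up for illustration):

    # A plain, unordered table of records keyed by id (made-up data).
    records = {
        42: {"title": "Ask HN: ...",    "score": 130, "created": 1001},
        7:  {"title": "Show HN: ...",   "score": 55,  "created": 1005},
        19: {"title": "Launch HN: ...", "score": 210, "created": 1002},
    }

    # Indexes kept alongside the table: key lists sorted by whatever orderings
    # the application needs. On insert you'd update these too (bisect.insort,
    # or just re-sort if writes are rare).
    by_newest = sorted(records, key=lambda k: records[k]["created"], reverse=True)
    by_score  = sorted(records, key=lambda k: records[k]["score"],  reverse=True)

    # An ordered view is just a walk over an index; the table itself stays unordered.
    for key in by_score:
        print(records[key]["title"], records[key]["score"])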
It's only more memory efficient than its previous incarnation.
It's an interesting implementation, though; the use of sparse arrays to index the entries is compelling. Measuring this kind of thing can be non-trivial, since you also have to account for GC (and/or compaction) kicking in at different points in your application. There are always trade-offs.
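To illustrate what that layout looks like in general terms, here's a rough Python sketch of the "dense entries list plus sparse index array" idea. It's not the actual implementation being discussed, just the shape of the technique, and it skips resizing and deletion entirely:

    class CompactDictSketch:
        """Sketch only: entries live in a dense, insertion-ordered list and a
        sparse array holds small positions into that list. No resizing or
        deletion; assumes the sparse array never fills up."""

        def __init__(self, size=64):
            self.index = [None] * size   # sparse: slot -> position in self.entries
            self.entries = []            # dense: (key, value) pairs in insertion order

        def _slot(self, key):
            i = hash(key) % len(self.index)
            # Linear probing: stop at an empty slot or the slot holding this key.
            while self.index[i] is not None and self.entries[self.index[i]][0] != key:
                i = (i + 1) % len(self.index)
            return i

        def __setitem__(self, key, value):
            i = self._slot(key)
            if self.index[i] is None:
                self.index[i] = len(self.entries)
                self.entries.append((key, value))
            else:
                self.entries[self.index[i]] = (key, value)

        def __getitem__(self, key):
            i = self._slot(key)
            if self.index[i] is None:
                raise KeyError(key)
            return self.entries[self.index[i]][1]

        def __iter__(self):
            # Iteration order falls out of the dense list for free.
            return (k for k, _ in self.entries)

    d = CompactDictSketch()
    d["b"] = 1
    d["a"] = 2
    print(list(d))   # ['b', 'a'] -- insertion order preserved
    print(d["a"])    # 2

Lookups stay hash-based via the sparse index while iteration order comes from the dense list, which is the appeal; it's also why memory behaviour and GC timing matter so much when you try to measure it.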
Personally, I haven't compared the various implementations (Clojure vs. Racket vs. Python) closely enough to give you any real insight. I do know Clojure's (or rather Java's) array-maps are costly when you're doing lots of lookups over large data sets, because I've benchmarked it.
The best bet is to see how an implementation works for your app with your data.
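If you want a quick sanity check along those lines, a throwaway micro-benchmark is usually enough to see the shape of the cost. This is a hypothetical Python example rather than Clojure, contrasting a hash lookup with the linear scan an array-map effectively does; the absolute numbers will depend entirely on your data and sizes:

    import timeit

    n = 10_000
    keys = [f"k{i}" for i in range(n)]

    # A hash-based table vs. an array-map-like flat list of (key, value) pairs.
    hash_map = {k: i for i, k in enumerate(keys)}
    array_map = list(hash_map.items())

    def lookup_array_map(key):
        # Linear scan -- roughly what an array-map does on lookup.
        for k, v in array_map:
            if k == key:
                return v
        raise KeyError(key)

    probe = keys[-1]  # worst case for the linear scan
    print("dict:     ", timeit.timeit(lambda: hash_map[probe], number=10_000))
    print("array-map:", timeit.timeit(lambda: lookup_array_map(probe), number=10_000))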
> "The microbenchmarks that we did show large improvements on large and very large dictionaries (particularly, building dictionaries of at least a couple 100s of items is now twice faster) and break-even on small ones (between 20% slower and 20% faster depending very much on the usage patterns and sizes of dictionaries)."