I find there are two things which Object-Relational Mappers like Django's provide. One, infrequently useful, is dynamically constructing queries from user input.
The other is versioning of your database schema, allowing you to easily upgrade and downgrade your databases. Not all ORMs provide this, but when handcoding that logic I miss it.
Also I hear many developers don't like SQL's syntax, but I think that's a mix of it being overpowered for their needs and them not really having tried it.
I actually find it kind of silly that, with all the AI hype, modern "NoSQL" databases tend to be dumber than what existed before.
There is after all significant overlap between my "AI" education and relational database query planners.
But I would be interested in knowing whether there's a tool focused on generating database migration scripts. I doubt I could use it now for Odysseus without breaking things, but I do wish I knew about such a tool.
@alcinnz There's plenty of overlap. Neural networks, after all, can be described as a graph.
You may find this interesting:
@alcinnz Data migration is one of those tasks where using the full power of SQL comes in pretty handy. It also isn't something I easily trust to tooling - tools don't really understand your data, after all.
In a sense I wonder if we're even talking about the same thing. What kind of migrations did you have in mind that can be easily automated?
@ayo The situation I have with Odysseus is that it keeps local databases on everyone's computers, and I have to make sure they're upgraded to the latest schema corresponding to the version they're running. So far that's mostly involved adding tables and columns.
I've got a script which does this on startup, but I have frequently found it doesn't deal well with branches operating on different schema versions.
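That startup-upgrade pattern can be sketched quite compactly. Here's a minimal sketch, assuming SQLite (which tracks a schema version for you via `PRAGMA user_version`); the table and column names are made up for illustration:

```python
import sqlite3

# Ordered migrations; an entry's position (1-based) is the schema
# version the database reaches after applying it.
MIGRATIONS = [
    "CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, url TEXT NOT NULL)",
    "ALTER TABLE bookmarks ADD COLUMN title TEXT",
]

def upgrade(conn: sqlite3.Connection) -> None:
    """Apply any migrations newer than the database's recorded version."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, statement in enumerate(MIGRATIONS, start=1):
        if version > current:
            conn.execute(statement)
            # PRAGMAs can't take bound parameters; version is an int.
            conn.execute(f"PRAGMA user_version = {version}")
    conn.commit()

conn = sqlite3.connect(":memory:")
upgrade(conn)   # fresh database: runs both migrations
upgrade(conn)   # second run is a no-op
print(conn.execute("PRAGMA user_version").fetchone()[0])  # → 2
```

The key property is that each migration runs at most once per database, so old installations catch up step by step — and the branch fragility shows up exactly when two branches both try to claim the next version number.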
@ayo That's my situation which leaves me wishing for that.
But I'm specifically thinking of a system like: https://docs.djangoproject.com/en/2.1/topics/migrations/
Though I do like SQL's syntax.
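For anyone unfamiliar with that system: Django records each schema change as a small declarative Python module, generated by `manage.py makemigrations`. A rough sketch of what it emits for an added column (the model and field names here are purely illustrative):

```python
# history/migrations/0002_visit_title.py — illustrative names throughout.
from django.db import migrations, models

class Migration(migrations.Migration):
    # Declares where this step sits in the migration graph.
    dependencies = [('history', '0001_initial')]

    operations = [
        migrations.AddField(
            model_name='visit',
            name='title',
            field=models.TextField(null=True),
        ),
    ]
```

Because each file declares its dependencies explicitly, Django can detect when two branches have both added a "next" migration and ask you to merge them.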
@alcinnz Ah okay, adding tables and columns is indeed relatively simple - I've often dealt with transformations of existing data, which automation isn't likely to get right.
With different branches, do you mean that your database schema evolution isn't linear? I've toyed with diesel.rs, which seems to treat each migration as separate and independent, but dealing with non-linear evolution feels pretty fragile even with that.
@ayo I've kept my migrations linear, just sometimes I don't want to explicitly think about these migrations when I don't have to.
The bigger problem is that I do find branches useful for developing the code around it. But when I branch the surrounding code I have to think about whether the wrong migrations will be applied. And about whether I am about to branch the migrations.
@alcinnz So far I've always been able to make the code in my active branches work with the most recent version of the database schema (and "the most recent database schema" hasn't been ambiguous so far), but I can indeed see how things can get complex and confusing pretty fast.
@ayo I can see that, if you keep your schema + migrations outside your application code.
@alcinnz Raises hand: used SQL all my life, possibly more than 20 years, I don't even remember. I hate it. It's a remnant from a long-gone era that only makes sense for a few use cases. All useful data is linked; it makes no sense to link it at query time every time you run a query. I think multi-model key-value/document/graph databases like OrientDB and ArangoDB (which provide a SQL-like interface on top) are a good compromise. Personally I prefer no SQL at all these days.
@haitch I can agree that whether you use SQL should be dependent on what data you have and how you want to use it.
If you're just using it to look things up by ID, SQL is more powerful than you need and you'd benefit from something simpler.
And if your data is "open world" (incorporates arbitrary external data) you'd benefit from using a language like SPARQL instead. Though not enough sites currently have this property.
But I like SQL for what it's good at.
@alcinnz And it's good at... ?
@haitch Analysing a finite/closed-world dataset.
@alcinnz 1) Is the schema mutable?
2) Does the data really represent documents, possibly with nested elements?
3) Is the data linked, with possibly several many-to-many relationships?
4) Do you often find yourself writing LEFT OUTER JOIN?
5) Does the data have a graph-like structure, where you sometimes need to find shortest paths between data?
If any of these things is true, then SQL is not the best tool for the job. Nowadays, it almost never is unless there's legacy code involved.
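On point 5: shortest paths *are* expressible in standard SQL via recursive CTEs, but the query is noticeably more ceremony than a graph database's traversal syntax. A toy sketch using SQLite from Python (the edge data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE edges (src TEXT, dst TEXT, cost REAL);
    INSERT INTO edges VALUES
        ('a','b',1), ('b','c',1), ('a','c',5), ('c','d',1);
""")

# Cheapest path cost from 'a' to 'd': enumerate walks recursively,
# then take the minimum. Workable, but verbose for what it asks.
row = conn.execute("""
    WITH RECURSIVE walk(node, cost) AS (
        SELECT 'a', 0.0
        UNION ALL
        SELECT e.dst, w.cost + e.cost
        FROM walk w JOIN edges e ON e.src = w.node
        WHERE w.cost < 100   -- crude bound to guarantee termination
    )
    SELECT MIN(cost) FROM walk WHERE node = 'd'
""").fetchone()
print(row[0])  # → 3.0  (a → b → c → d beats a → c → d at cost 6)
```

In a graph query language this is a built-in traversal rather than a hand-rolled fixpoint, which is much of the argument above.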
@haitch I'm not sure that I agree on how rarely SQL is suited to your task, but I don't care enough to argue it further.
All I'm saying is don't discount SQL for your project because it's old. Discount it because you're not really processing your data, or because you need to deal with plenty of external data you don't want to or can't ingest.
@alcinnz I should have clarified that 'I use SQL because it's useful, familiar, and less costly to me to use what I already know' is a perfectly valid reason. I don't reject SQL because it's old, but because there are objectively superior tools for most of the use cases where we just used SQL without much thought in the past because there really weren't many comparably powerful options available. And that's factually just not the case anymore.
Anyway, sorry, didn't mean to derail, I just don't want my position to be misinterpreted.
@haitch Yes, too often we use tools because it's "in our stack" rather than thinking critically about whether it's appropriate.
But also there's the aspect that it's well suited to silos, which are things I wish didn't exist.