As many people know, a database engine is usually designed and built to handle either transactions well (OLTP) or analytic operations well (OLAP), but not both. A few databases on the market claim to do both, but I don't know how well they actually perform at each.
I have created a new kind of general-purpose data management system called Didgets that uses a set of key-value stores I invented to attach metadata tags to data objects. It turns out that these same key-value stores also work really well for building columnar-store relational tables. I have created some fairly large tables with it (1,000 columns, 200M rows) and it runs queries extremely fast.
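To make the idea concrete, here is a minimal sketch of how a columnar table can be built on top of per-column key-value stores. This is purely my own illustration under that assumption; the class and method names are hypothetical and say nothing about how Didgets is actually implemented.

```python
# Sketch only: each column is its own key-value store mapping
# row id -> value, which is the basic shape of a columnar table
# layered over key-value stores. Not the actual Didgets design.

class ColumnarTable:
    def __init__(self, column_names):
        # one key-value store (a plain dict here) per column
        self.columns = {name: {} for name in column_names}
        self.next_row_id = 0

    def insert(self, row):
        """Transactional-style insert: write the row's values
        into each column's store under a fresh row id."""
        rid = self.next_row_id
        self.next_row_id += 1
        for name, value in row.items():
            self.columns[name][rid] = value
        return rid

    def scan(self, column, predicate):
        """Analytic-style scan: touches only the one column's
        store, which is where columnar layouts win."""
        return [rid for rid, v in self.columns[column].items()
                if predicate(v)]

t = ColumnarTable(["id", "price"])
t.insert({"id": 1, "price": 10})
t.insert({"id": 2, "price": 25})
print(t.scan("price", lambda p: p > 15))  # -> [1]
```

The point of the sketch is that inserts (row-at-a-time writes) and scans (single-column reads) both map naturally onto the same underlying key-value stores, which is roughly the mixed-workload property described above.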
Like other columnar stores, it seems to handle analytic operations faster than row-oriented systems, but it can also do transactions quite fast. On my desktop I can insert additional rows into a 15M-row, 10-column table at a rate of about 100K rows/sec. The latest video on my YouTube channel (https://www.youtube.com/channel/UC-L1oTcH0ocMXShifCt4JQQ) shows it doing that while also running analytic operations as the data grows.
Since my experience is much more with file systems than with databases (I don't build data warehouses or do ETL), I don't know how important this feature is. I don't want to waste time refining this capability and trying to promote it if no one finds it particularly useful.
Who is using mixed workload databases and why?
For HTAP systems, as mentioned in the blog above, there are quite a few industrial products: Google's recently announced AlloyDB (https://cloud.google.com/alloydb), Snowflake's Unistore (https://www.snowflake.com/workloads/unistore/), and TiDB (https://github.com/pingcap/tidb), one of the most popular open-source projects in this space, which has been deployed by many business applications.
Hopefully these help a little bit :-)