How Postgres is Misused and Abused in the Wild
This post contains the slides from the talk that I gave at PGConf.dev
in Vancouver a few weeks ago, together with the transcript and other information.
This is my new approach to sharing slides, inspired by
Simon Willison's annotated presentations.
You can also watch the video here.
Background
I initially thought that I didn't have anything to share at a PostgreSQL development conference because I'm not a Postgres developer.
On reflection, I realised that I get a lot of insights from seeing what my customers and other users are doing, or trying to do, with Postgres.
We can learn lessons from these insights to help us make Postgres, its extensions and related tools even better.
Talk Abstract
People never cease to amaze me with the unexpected and imaginative ways they come up with to misuse the various tools in the Postgres ecosystem.
Of course, as a Solutions Architect, I try to steer them gently back on course,
but there are usually lessons to be learnt from the things they’re trying to do.
Maybe the tool they're using doesn't have the functionality they want or need, so they implement a workaround.
Maybe the functionality they need actually does exist, but they couldn’t find the information they needed in the documentation.
Perhaps a different tool would actually be a better choice, but the user wasn’t aware it existed.
Sometimes, the behaviour the user's looking for is a really, really bad idea and that’s why the functionality doesn’t exist.
I’ll recount some of the (mis)use-cases I’ve come across, not (just) for amusement purposes,
but so we can look at the real-world ways in which people are using Postgres,
and how we can take lessons from that to improve the tools in the Postgres ecosystem.
So, let's talk about how Postgres is misused and abused in the wild:
First, just for fun, a few facts about me (skip this bit if you've already seen one of my talks or read my previous posts):
This is me in my happy place, splattered in mud!
I live in the mountains in France and when I'm not playing with databases you'll often find me out on one of my bikes -
road, gravel or mtb depending on how I’m feeling.
I took advantage of the fact that I was in Vancouver for PGConf.dev to head to Whistler for the weekend.
I’m a Senior Solutions Architect at Crunchy Data, which I came to via a long history as a database administrator,
I was recently elected to the PostgreSQL Europe Board,
and I’m leading the (very) recently established PostgreSQL Europe diversity task force.
I like speaking at Postgres conferences, but also at developer events.
I enjoy going to PyCons, DjangoCons, Ruby Conferences, Devoxx, Kubecon etc.
meeting some of the people who use Postgres, sharing database knowledge
and spreading the love for Postgres outside of the Postgres community.
When I speak at developer events, I usually start with the confession that I’m not a developer,
but I show the audience this diagram of my career to date to reassure them that
I've worked with databases for all this time and that I really do know about databases.
At PGConf.dev, I felt the need for two apologies/confessions.
Firstly I’m not a developer, and secondly many of the people in that audience know as much about databases as I do,
and in many cases, they know much more.
My job titles over the past 25 years haven’t varied much; all but the current one contained the word database.
But although I was allegedly one of the database experts in each of the companies I’ve worked for,
I’ve always been a user rather than a creator or maintainer of the RDBMS software.
It wasn’t until I started working with Postgres that it even crossed my mind that I could
or should be involved with actually helping to shape the software that I use.
Just for a bit of humour in the slides, I shared a cartoon from my new favourite Instagram account in_otternews.
This one describes pretty much how I feel day to day, explaining how to do things I don't do myself anymore.
There's a very long red line on this alternative view of my career to date.
I started out as an Oracle database administrator 25 years ago now.
I did a tiny bit of Sybase and SQL Server, but I tried to move away from those as quickly as I could,
and I'm ashamed to say that I didn't actually discover PostgreSQL until 2019.
To my credit though, once I discovered it, it didn't take me long to ditch the red part of the graph
and move over to working exclusively with Postgres.
And I'm very much hoping that the blue line will continue for a long time to come.
So although you can see I'm a relative newcomer to the project,
it does mean that I come with a background of having worked with other systems,
including big proprietary systems like Oracle.
I think having that different perspective can actually be really valuable.
From everything I've said so far, given how quickly I ditched Oracle once I discovered Postgres,
and how invested I am in the Postgres community,
it's fairly safe to assume that I love Postgres.
It really is an amazing project.
As we all know, Postgres has many of the features that are available in the big, proprietary database systems,
as well as some that aren't, and it holds its own in terms of performance and reliability,
and we have a rich community, full of people who love Postgres and who want to help people to get the most out of it.
That means, in my experience, that the support is better than you get from the proprietary systems
even before you consider the professional services on offer from the various PostgreSQL companies.
One of the things that we all hear from users is “It just works” and
I’ve been overwhelmed by how much love there is for Postgres in various developer communities.
But of course, there are always things we can do better.
The talk takes inspiration from some of the creative ways that people find to use Postgres.
It's easy, obviously, when we're in those situations to just tell the user not to do something,
or to ask "Why on Earth are you doing that?"
But there are often good reasons behind decisions that don’t fit our vision of what “should” be done with the software,
and some of those reasons might actually indicate that we need to improve things.
Here's what often happens, and I know I can be guilty of feeding into this cycle:
- The user is doing something strange, or at least something that doesn’t make sense to us.
- We find out about it at the "something goes wrong" stage.
There's a message on the mailing list, there's a support ticket, there are slack messages...
There's presumably other bizarre stuff happening all over the place that we don't know about
because no-one’s flagged an issue related to it.
- The immediate response is usually to say "Don't do that" because, to us,
it’s obvious that it’s not a good idea.
- In that case, if the user can work around what's happening,
if they've got some kind of fix for their immediate issue,
they'll completely ignore us and carry on doing whatever it was they were doing.
Nobody learns very much in that scenario.
What should we do instead?
- The user's still going to do something strange, and something's still going to go wrong.
We're not going to change that! So what should we do after that point?
- We try to understand why the user is doing that.
What are they actually trying to do? What's their use case? What are their actual requirements?
What else did they try before they came to the decision that this was the right way to go?
- Once we've done that, then it's probably fine to do some education;
to gently explain why actually that's probably not the best way of doing things.
It's probably not a good idea to set max_connections to a million,
to allocate a millicore to your database nodes,
or to only use pg_dump as your backup strategy.
- And then we can suggest and explain some alternatives to them;
"Have you thought about doing this?" or "This is what we consider to be best practice",
and try and explain how they might do things.
- And then once we've done that, try and think about what the exchange actually tells us. Why did the user think that was the best solution, and most importantly, how can we improve that? What can we do to help people to make the right decisions?
It is obviously really easy to blame the user, saying things like
- "You're using the wrong tool for that."
- "You shouldn't be implementing that architecture."
- "You've set those parameters wrong."
- "No, you don't understand, you've not got it." or just
- "You need to upgrade."
As in_otternews says, "Everything happens for a reason, but sometimes the reason is that you're Steve and you make bad decisions."
But maybe... maybe we should think about it slightly differently.
Maybe the user's using the wrong tool because it's the only one they know about,
or because it's the only one that they can understand the documentation for,
or the only one that had enough examples in it that helped them to figure out what they were supposed to do.
Maybe the functionality that the user wants doesn't exist.
Maybe they've implemented a particular architecture because they're working around constraints in their own infrastructure that they can't actually do anything about.
Maybe the most appropriate architecture for their use case isn't well documented or explained.
Maybe the user doesn't understand something because there aren't enough training resources,
or we've not made it clear to users where they can find the training resources that they need.
If the user's on an old version, what's holding them back from upgrading?
Why do they feel that they're better off staying on an older version and living with issues rather than upgrading?
Is there anything that we can do to make that seem as though it's not an insurmountable task?
Of course, it's not just Postgres that's used wrong. All sorts of tools are used wrong, and this is one of my all-time favorites:
"Public Health England (PHE) admitted responsibility for the error that meant 15,841 positive coronavirus test results went unreported in England between 25 September and 2 October 2020"
Do you remember the time that Public Health England managed to lose almost 16,000 positive Covid test results because someone opened a file containing millions of records in Excel?
And when people do things like that they blame the tool, saying "Excel did something wrong."
This is a quote from a PHE official:
"It was down to a combination of human error and IT, that is a human using IT. It's fundamentally an IT error, but there is a bit of human error involved in this too."
Even an article by The Guardian's tech editor suggested that it was Excel that had caused the 16,000 rows to be lost:
"Covid: how Excel may have caused loss of 16,000 test results in England"
And that's one of the things that we need to be mindful of; that regardless of the actual cause of an issue, if something goes wrong when people are using Postgres, they will blame Postgres. They will think that it's Postgres' fault, and obviously we want to avoid that. That's not good for anybody.
We also need to be aware that not everyone's a database expert.
Most of the people in the audience for this talk are database experts.
But I've had to learn to keep in mind the fact that not everyone I talk to is.
The traditional DBA role exists less and less.
My experience is that the development side of databases (designing and creating the schemas,
writing the code etc.) is almost always done by development teams.
I have a lot of respect for application developers but they don't necessarily have database knowledge.
When I speak to developers, they tell me that they don't know what data normalization is, they don't know what ACID properties are.
That means there's a lot of information that we think is evident, because we've lived with it
and worked with it for so many years, but actually not everybody knows it.
Once these systems go into production, it's often infrastructure teams that are looking after them.
And again, infrastructure teams have a lot of knowledge about all sorts of things, but not necessarily databases.
So they don't know all the intricacies of how you need to manage high availability or disaster recovery for a database system.
I suspect this is the section of the talk that people are most interested in;
the real-world examples of what people are doing.
I haven't named any names, to protect the innocent.
Most of the things that I'll talk about are actually fairly normal. People are doing them again and again.
We see support tickets, we get emails, I get calls from my customers telling me about these things.
And this is obviously only a small selection of the different bizarre and interesting things that people might be doing with Postgres.
Users insist that they have to stay on a specific minor release of Postgres. I quite often hear:
"Oh, but we have to use PostgreSQL 13.7"
(or 15.3 or 12.14).
The documentation is actually pretty unambiguous about this. It says:
"The community considers performing minor upgrades to be less risky than continuing to run an old minor version."
and:
"We recommend that users always run the current minor release associated with their major version."
Source: https://www.postgresql.org/support/versioning/
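It perhaps helps to show just how small a minor upgrade actually is. Here's a hedged sketch for a packaged installation (the package and service names are illustrative assumptions that vary by platform and major version):

```sh
# A minor release replaces binaries only; the on-disk data format doesn't change.
sudo dnf update postgresql16-server     # install the latest minor release
sudo systemctl restart postgresql-16    # restart on the new binaries
psql -c 'SELECT version();'             # confirm the new minor version
```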
We're telling users that they should be on the latest minor release, but for some reason this message isn't getting through.
As well as being in the documentation, people have given talks about upgrading Postgres, for example
Lætitia Avrot's Upgrade Postgres! From PostgreSQL Conference Europe 2023 and
Jimmy Angelakos' community panel discussion on Upgradability
at PostgreSQL Conference Europe 2022.
So what are we doing wrong here? What does this kind of exchange tell us?
Should the information be in a more prominent position in the documentation?
Do we need some kind of Public Service Announcement?
Do we need to be walking around with megaphones telling people this?
Other users are stuck on a particular major version of Postgres.
"We can only use PostgreSQL 12" is maybe not be too bad, but we still have customers using older versions of Postgres.
At this point, of course, audience members started to call out their own oldest PostgreSQL version in use.
Someone definitely said 10, and I think earlier versions were mentioned, but I didn't catch those.
There are various reasons for the phenomenon of "having to run a particular major version of Postgres".
Sometimes the database is used to support third-party software, and the third party software vendor has only certified their product against a particular version.
Sometimes the vendor also specifies a particular minor release of Postgres, which obviously ties in with what we saw on the previous slide.
Sometimes, for all sorts of reasons, people are scared to upgrade.
They are worried that it's going to be a huge project, it's going to take too long, there'll be too much downtime,
they need to make too many application changes, it's too much risk etc.
Sometimes, there really is nothing we can do. When we're asked:
"Can you provide solutions on how we can make it work? The customer will not agree to upgrade."
even though we've already explained that they're hitting a bug, or that the functionality doesn't exist in the version they're using, and that the solution is to upgrade, it's very difficult to help.
Backup and Recovery gets a whole section to itself
because there are just so many "interesting" things that people do.
As someone who was a DBA for a very long time, I think this is probably the section that worries me the most.
It scares me.
I don't like it when my customers are in a situation where I know that they could lose data.
I want all my customers to be able to restore their database if something goes wrong.
I want that warm, fuzzy feeling of knowing that everything's safe.
We see this a lot:
"What do you mean, I can’t do PITR? I run a pg_dump every day."
Don't get me wrong, of course I think pg_dump is a fantastic utility. It's very useful, it definitely has good reasons to exist and good reasons to be used, but obviously not as a backup and recovery tool if you need point in time recovery.
The first sentence about pg_dump in the PostgreSQL docs says
"pg_dump - extract a PostgreSQL database into a script file or other archive file."
So far so good. But then it says
"pg_dump is a utility for backing up a PostgreSQL database."
Source: https://www.postgresql.org/docs/current/app-pgdump.html
I personally take care not to talk about pg_dump as a backup tool, because I’ve found that it confuses people.
When users hear "for backing up a Postgres database" they think "backup and recovery tool".
The word backup is used a couple more times,
but the rest of the documentation then talks about dumps.
Side note: after the talk, there was some discussion on the mailing lists about changing this wording.
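For contrast, real point in time recovery is built on continuous WAL archiving, not on dumps. Here's a minimal sketch of the configuration involved, assuming pgBackRest (the stanza name is illustrative):

```
# postgresql.conf: continuous WAL archiving, the foundation of PITR.
wal_level = replica
archive_mode = on
archive_command = 'pgbackrest --stanza=main archive-push %p'
```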
"We need to be able to recover to any point in time so we take backups every 20 minutes."
Maybe we don’t do a good enough job of explaining how point in time recovery works,
and why it's almost certainly unnecessary and counterproductive to take very frequent backups.
I've explained multiple times to one particular user over the last two years
that really they do not need to be running backups every 20 minutes.
Actually, they can't run backups every 20 minutes because their backups take longer than 20 minutes to run.
They've got all of the infrastructure and configuration in place to do point in time recovery.
I've explained to them how it works, and shown them a point in time recovery.
It's there, it works, it's in place, but they still don't believe it.
I like to think I’m fairly good at explaining things, but for some reason I’ve not managed to get the point across.
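For what it's worth, here's a hedged sketch of the kind of demonstration I mean, again assuming pgBackRest (the stanza name and timestamp are illustrative). One base backup plus the continuously archived WAL already lets you restore to any moment in the covered range, so a 20-minute backup schedule adds nothing:

```sh
pgbackrest --stanza=main info    # lists backups and the range of archived WAL
# Restore to any moment covered by that WAL range, not just to a backup time:
pgbackrest --stanza=main restore \
    --type=time --target='2024-06-01 09:37:00' --target-action=promote
```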
"We don’t need a database backup tool. We take daily snapshots of our data volumes"
This is not a quote from one customer but from many customers, many users.
It's easy to say "You don't use volume snapshots to back up a database",
but databases are huge now. Multiple-terabyte databases are completely normal,
and you've got infrastructure teams looking after these databases.
They've got volume snapshots in place to back up the rest of their infrastructure.
Why would they even think that the database is any different?
Why would they consider using a different tool to back up the databases?
They either don't want to or can't wait for multiple hours or even days for a huge database to be backed up,
so there are good reasons to be using snapshots as a tool to help with their database backup and recovery.
The challenge comes when they don't consider all of the implications of using just volume snapshots as a backup strategy,
and when the backup and recovery isn't tested, or at least isn't tested under load.
This is another example where the PostgreSQL documentation is clear. It talks about the requirements for using volume snapshots as part of a backup strategy. It states that
- you need a frozen snapshot.
- crash recovery will take place when you restore a volume snapshot and open Postgres.
- you need to carefully read your filesystem documentation.
- if you've got your WAL on a different filesystem from your data directory, or if you've got multiple tablespaces on different filesystems, you need to be able to take a simultaneous frozen snapshot.
Source: https://www.postgresql.org/docs/current/backup-file.html
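Where a simultaneous frozen snapshot across every volume isn't possible, the low-level backup API can bracket the snapshots instead. A minimal sketch, using the PostgreSQL 15+ function names (older releases use pg_start_backup and pg_stop_backup), and assuming WAL archiving is already in place:

```sql
-- Bracket the storage snapshot with the low-level backup API.
-- WAL archiving must already be working for the result to be recoverable.
SELECT pg_backup_start('volume-snapshot', fast => true);
-- ... trigger the frozen snapshot(s) of all relevant volumes here ...
SELECT * FROM pg_backup_stop();  -- keep the returned label file with the snapshot
```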
But the message still, again, doesn't seem to be getting through. I don't know if some of this is down to the fantastical claims made by storage vendors, or by the non-database-specific backup tools that say they will back up and restore your Postgres database using volume snapshots. Perhaps this means that people don't think they need to worry about the details.
I don't understand not monitoring your database backups, but so many people seem to put a cron job in place to perform backups and that's it.
It's just a checklist item - "we've implemented backups - done!"
The last time I heard someone say
"I’ve no idea when our last successful backup was taken."
was when a ticket was escalated to me
because a customer was dealing with a failed database recovery process.
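Even a minimal check would be better than nothing. A hedged sketch, assuming WAL archiving is configured (the pgBackRest stanza name is illustrative):

```sql
-- Is WAL archiving actually succeeding, or just assumed to be?
SELECT last_archived_wal, last_archived_time,
       failed_count, last_failed_wal, last_failed_time
FROM pg_stat_archiver;
-- pgBackRest users can also schedule "pgbackrest --stanza=main check"
-- and alert on a non-zero exit code.
```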
I asked my colleagues to share their own experiences with me when I was preparing this talk,
and one of them said:
"Not realizing their cron backups are misconfigured which leads to WAL never being expired by pgBackRest which leads to archive_command failing which leads to primary running out of disk space which leads to Patroni failover which leads to the replicas running out of disk space too."
That was far too specific to have been a general reflection, and must have come from a point of pain. I can only assume that my colleague had just had to help a customer to recover from that particular situation when I asked.
"My backup server was in my failed datacenter, and now I can’t failover to my DR site."
or
"My backups were on my database server, and now I can't recover."
Is this just because people want to have everything in one place so it's neat and tidy?!
Database backups don't need to be kept on the database server or just in the primary datacenter
- there's cloud storage, network storage, different availability zones etc.
I've found that sometimes this happens because organisations don't have the
internal processes that would make it easy to get hold of the storage they need.
"We don’t need a database backup tool, we’ve got some custom shell scripts for that."
This is obviously another pain point. When I asked what people were doing with home-grown backup scripts, another colleague said:
"Modifying a string-and-sealingwax pg_dump cronjob to use pgbackrest instead, but keeping a redirect in the invocation so it overwrites the (dynamically created) location of the intended backup output with pgbackrest’s stdout first."
I almost wish I hadn't asked!
We have well established, fully featured, reliable, performant, open source, community created and maintained backup and recovery tools.
Users don't need to be writing their own custom shell scripts to do backup and recovery.
But some people still think that they'll do better themselves by creating home-grown backup scripts.
Federico Campoli alerted me to this one on LinkedIn, and he said that he sees a lot of these. Somebody, who is apparently a Postgres database administrator, shared a "useful hint" with other Postgres users about how you can write a script to "Automate Database Backups". Federico replied explaining that that's not the best way to back up a Postgres database, and that in any case the script saves the backup on the same machine, which isn't helpful. I also responded to say that I was actually really interested, and would like to know why people do this instead of using one of the tools available, but the post was deleted shortly after we tried to interact, so we'll never know.
Yup, people are still deleting WAL files.
I know the change of name from pg_xlog to pg_wal back in version 10
was so that people wouldn't see "log" and think that it was OK to delete,
and I can only assume that it had some impact, but I do know that people are still doing this.
I know that my colleagues have written blog posts about this, I know that
other companies have written blog posts about this,
I know that we frequently tell users "Do not do this", yet people still do.
When I gave the talk, I said I thought the PostgreSQL documentation made it clear
that you shouldn't delete WAL files,
but having looked back afterwards to find a link and a quote, I realise that it's not so clear cut.
I tried searches with terms such as "delete WAL files" and "WAL directory full"
both inside the PostgreSQL documentation and using a search engine, and nothing came up in the docs.
It feels like something we could easily improve.
Fortunately, though, the search engine results are full of all of the major Postgres players
saying "Don't delete WAL files."
Apart from adding a more prominent warning to the documentation,
I said (jokingly) that maybe we need to append DO_NOT_DELETE to the name of every WAL file.
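More constructively: if pg_wal is filling up, there's always a reason, and it's usually visible from SQL. A hedged diagnostic sketch:

```sql
-- Is the archiver failing? Archiving failures force WAL to be retained.
SELECT failed_count, last_failed_wal, last_failed_time
FROM pg_stat_archiver;

-- Is a replication slot (active or abandoned) forcing WAL to be retained?
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```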
Moving on to the application.
"The app developers said that each of our 50 app servers needs a thousand connections."
or:
"We kept getting connection errors so we set max_connections to 100K."
Lots of people are setting really, really high values of max_connections. Although it's a lot less of an issue than it used to be, it's still causing problems. I'm hypothesising, but I suspect it's an education issue, especially for people coming from other database management systems: we still need to explain to users how Postgres works, and what the implications are if they set max_connections too high or have too many concurrent connections.
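A connection pooler usually solves the underlying problem far better than a huge max_connections. A hedged PgBouncer sketch, with illustrative names and numbers:

```ini
; pgbouncer.ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
pool_mode = transaction
max_client_conn = 5000      ; connections the app servers are allowed to open
default_pool_size = 20      ; connections Postgres actually has to service
```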
Oh, I like it when people expect magic! I've heard this one many times. Users want Postgres to magically route read/write activity to their primary database and read only activity to their standby database without making any application changes. Unfortunately, unless you tell it, Postgres doesn't know in advance that a session or transaction is only planning to do reads.
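What the client can do, though, is say what kind of session it needs. A hedged sketch using libpq's multi-host connection strings (host names are illustrative, and some target_session_attrs values need PostgreSQL 14 or later client libraries):

```sh
# The client states what it needs; libpq tries each host until one matches.
psql "host=pg-primary,pg-standby dbname=app target_session_attrs=read-write"
psql "host=pg-primary,pg-standby dbname=app target_session_attrs=prefer-standby"
```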
OK, I confess that "I allocated a millicore to Postgres and it doesn’t seem to be performing very well" isn't an exact quote, but we do often see people allocating a tiny amount of resource to Postgres and being surprised that it struggles under load.
I had a conversation with some Postgres friends about this before presenting the talk.
We don't specify minimum resources for Postgres. There are good reasons for that;
partly because Postgres itself doesn't need much in the way of resources and will run almost anywhere.
You can install it, you can start it up, it will run.
The issue, of course, is once you start putting real life workloads through it.
It seems fairly obvious that your database isn't going to perform well if you only give it a millicore but, on the other hand, how do people know what they need to allocate?
There are so many moving parts that it's difficult to recommend specific resource allocations,
but even putting together a chart with some rules of thumb for some different types of system (with a lot of caveats) could be helpful.
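In that spirit, here's a purely illustrative Kubernetes sketch; the numbers are assumptions rather than recommendations, and the only real point is the gap of several orders of magnitude between them and a millicore:

```yaml
# Kubernetes container resources: numbers are illustrative only;
# real sizing depends entirely on the workload.
resources:
  requests:
    cpu: "2"        # "1m" (a millicore) will start, but won't survive real load
    memory: 4Gi
  limits:
    memory: 4Gi
```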
There seems to be quite a lot of misunderstanding about how synchronous replication works. This is one that I get a lot of questions about, a lot of users being surprised at how things are acting, and not understanding how it should be configured or what the implications of the various settings are on performance, high availability, or durability.
We hear things like:
"I need synchronous replication but it mustn’t have any impact on my primary database."
I think it's fairly intuitive that if you implement synchronous replication, i.e. you tell the primary to "wait until the transaction has been written to the standby, and you've got confirmation that it's been written", that's going to add some extra time. The documentation does state that synchronous replication
"necessarily increases the response time for the requesting transaction.
The minimum wait time is the round-trip time between primary and standby."
Source: https://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION
but it still seems to surprise people.
"I implemented synchronous replication and now I can’t write to my primary database."
Again, I think this is fairly intuitive. If you've asked the primary to wait until things have been written to your standby, then if something goes wrong with that standby, writes to your primary will have to wait until you've fixed it. But people still seem surprised by this.
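Both behaviours follow directly from the configuration. A minimal sketch, with illustrative standby names:

```
# postgresql.conf
synchronous_standby_names = 'ANY 1 (standby1, standby2)'  # quorum of one
synchronous_commit = on   # remote_write / remote_apply adjust the guarantee
# Check which standbys are synchronous:
#   SELECT application_name, sync_state FROM pg_stat_replication;
```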
On the other hand we'll hear things like:
"I implemented synchronous replication but my primary is still working even though I lost my standby."
usually when people are using Patroni, because by default Patroni prefers availability over durability.
The Patroni documentation does explain this default behavior, saying:
"the standby will signal the primary to release itself from synchronous standby duties before Postgres shutdown is initiated."
Source: https://patroni.readthedocs.io/en/latest/replication_modes.html#replication-modes
And the next paragraph of the documentation explains that you should use synchronous_mode_strict if you want to ensure that you stay in synchronous replication mode.
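As a hedged sketch, the relevant part of the Patroni configuration looks something like this:

```yaml
# Patroni DCS configuration sketch (see the replication modes documentation)
synchronous_mode: true           # on its own, availability wins: the primary
                                 # releases a failed standby from sync duties
synchronous_mode_strict: true    # add this and durability wins: no synchronous
                                 # standby means no writes
```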
I actually find the synchronous replication section of the Postgres docs, and the Patroni documentation, fairly clear, easy to understand and complete.
I don't know if that's because I've read them so many times, and I've used them so many times to help make sure that customers have got things set up in the right way.
Perhaps I've forgotten what it was like the first time (or 10) that I read them.
Maybe we need to ask people who aren't so familiar with it to tell us what their initial impressions are when they read the documentation. It might not be clear to them.
Moving on:
"I want my Postgres database to behave exactly like my
<insert other RDBMS here> database."
OK, we actually know what most people mean there - it's usually:
"I want my Postgres database to behave exactly like my Oracle database."
A lot of Postgres newbies aren't database newbies. They know a lot about databases.
A lot of them, like me, have been working with Oracle for decades.
So they know how to manage a relational database system, but they don't know Postgres.
I find Postgres much easier to understand than I found Oracle,
and I find that there are plenty of places that I can go to get the information I need.
When I first started out, though, I did have to search around a bit to find resources that I could use.
I didn't want to start from scratch and learn about databases,
I just wanted to get up to speed quickly on what the differences were.
So there's definitely some of that, but there's also something else that I learnt about:
Functional Fixedness is
a cognitive bias that limits a person to using an object only in the way it's traditionally used, or only in the way they're used to using it.
So if someone's used to using something like Oracle, then when they're presented with an RDBMS, they're going to keep trying to use it in that known, familiar way.
The slide above shows what's apparently a classic experiment to demonstrate functional fixedness.
People were given the set of items at the top; a book of matches, a box of tacks and a candle, and were asked to use the things they'd been given to attach the candle to the wall.
Most people, when presented with things in this way, saw the box just as something that happened to be there as a container for the tacks,
so they tried to do things like melt the candle, and use the tacks to attach it directly to the wall.
The expected solution is the one shown in the picture: you attach the box to the wall with one of the tacks and put the candle in the box.
Apparently, when people were presented with the box separately from the tacks, they were considerably more likely to come up with this solution.
This tells us that if people are used to seeing something in a particular way they will keep seeing it in that way;
they don't have the imagination to see it in a different way.
So maybe we need to help people to have the imagination to see a database as "not necessarily Oracle".
To summarise; we all know that Postgres is awesome and I'm definitely not here to suggest otherwise.
We have such a great set of tools that goes with Postgres and in general the documentation is pretty good.
There are things we could probably all do to improve the documentation, by talking to people about what they need from it for example,
but we provide people with a fantastic system, good documentation and good tools that they can use to actually manage their databases.
But it's really easy to forget that not everybody that's working with Postgres is a database expert.
And as I mentioned earlier, people will blame Postgres if something goes wrong.
It's probably not Postgres' fault, it's probably not even the associated tool that they're using,
but if something goes wrong they will say "Oh, Postgres did this." and we don't want that.
None of us want people to be blaming Postgres, so whatever the reasons for things,
it's good for all of us if we can help people to get the most out of Postgres and continue to know that "it just works".
What does this all mean for us?
- We really need to think about communication with the end users of Postgres.
It has been fantastic for me over the last couple of years to go to app developer conferences
and talk to some of the people who absolutely love Postgres, but don't necessarily know anything about databases.
It's been fascinating to me, and it's given me a different perspective on things.
- We need to listen to what users are trying to do, find out what they already know, and try and meet them there.
Find out what they need in terms of training, what kind of tutorials or documentation they need to help them to learn things.
To me, and to most of the people in the audience for this talk, learning about databases is fascinating,
but it's not necessarily something that's taught to application developers, and it's not necessarily something that's interesting to them.
- Another thing that I found out from talking to people at developer events, is that they don't think that Postgres conferences are for them. If we want people to know more about how to work with databases, we need to try and encourage more people who aren't already Postgres experts to come along to the Postgres events and to interact with the Postgres community.
Finally, this presentation is based mainly on my own and my colleagues' experiences.
I would love people to get in touch and tell me their own experiences of the things they encounter that are being done "wrong".
I'd really like to improve the presentation and give it again with more information about which of these things are the most common,
and collate some suggestions of things we can do to avoid them.
Get in touch, or follow me for more database chat.
Mastodon: @karenhjex@mastodon.online
X/Twitter: @karenhjex
LinkedIn: https://www.linkedin.com/in/karenhjex/