Categorical claims are often the source of faulty statements. "Don't test with SQLLite [sic] when you use Postgres in Production" by Robellard is a fantastic example. I actually agree with a variant of this statement: "If you need high levels of assurance, don't test with SQLite alone when you use Postgres in production."
Robellard bases his claim on several points, noting that "SQLite has different SQL semantics than Postgres," "SQLite has different bugs than Postgres," and "Postgres has way more features than SQLite." He has a couple more points, but all of these largely amount to a discrepancy between SQLite and Postgres, or between one Postgres version and another, leading to a defect. These points are a genuine concern, but his claim presumes exactly one database back-end for testing, and exactly one risk profile across all applications.
As a quick diversion, I am not using the common definition of risk, which is synonymous with chance. I am using a more stringent definition: "the effect of uncertainty on objectives" as specified in ISO Guide 73:2009. This definition often requires an assessment of both the impact and likelihood of some form of scenario to obtain a fuller picture of an "effect."
If the risk posed by defects caused by an SQLite-Postgres discrepancy is too high, then you'll likely want to use Postgres as part of your testing strategy. If the risk posed is sufficiently low, then SQLite alone may be appropriate. Both decisions are predicated on the risk posed by defects, and on the organisation's appetite for risk.
A testing strategy comprising several different testing methodologies can often be thought of as a filter of several layers. Different layers are variously better or worse at surfacing different types of defects. Some are more likely to surface defects within components, and others are better at locating defects in the interactions between components. Other "layers" might be useful for catching other classes of defects. Each layer reduces the likelihood of a defect reaching production, which reduces the risk that defects pose. Each layer also has a cost associated with writing and maintaining that layer.
It's quite common for different layers to be run at different times. For instance, mock-based unit tests might be run very frequently by developers. This provides the developers with very quick feedback on their work. Integration tests backed by an in-memory database might be run prior to committing. These take a little longer to run and so might get run less often, but still catch most problems caused by erroneous component interactions. A continuous integration (CI) server might run integration tests backed by Postgres, and slower UI tests periodically. Finally, penetration tests might be conducted on a yearly or six-monthly basis.
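To make the layering concrete, here is a minimal sketch of how one integration test might be pointed at either back-end. It assumes JUnit 4 plus the SQLite and Postgres JDBC drivers on the classpath, and the TEST_DB_URL environment variable is a name I have invented for illustration: developers get the fast in-memory default, while the CI server points the same test at a real Postgres instance.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DatabaseLayerIT {

    // Fast in-memory SQLite by default; CI sets TEST_DB_URL (a made-up name)
    // to something like jdbc:postgresql://ci-db:5432/testdb?user=test&password=test
    private static final String DB_URL =
            System.getenv().getOrDefault("TEST_DB_URL", "jdbc:sqlite::memory:");

    @Test
    public void storesAndCountsRows() throws SQLException {
        try (Connection conn = DriverManager.getConnection(DB_URL);
             Statement stmt = conn.createStatement()) {
            // TEMPORARY keeps the table scoped to this connection on both back-ends.
            stmt.execute("CREATE TEMPORARY TABLE users (name TEXT NOT NULL)");
            stmt.execute("INSERT INTO users (name) VALUES ('Alice')");

            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM users")) {
                rs.next();
                assertEquals(1, rs.getInt(1));
            }
        }
    }
}
```

The point is not this particular test but the shape of it: the same test code sits in two different layers of the filter, purely by configuration.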
This sort of process aims to allow developers the flexibility to work with confidence by providing quick feedback. However, it also provides heavier-weight checking for the increased levels of assurance required for the risk-averse organisation. An organisation with a greater appetite for risk may remove one or more of those layers, such as in-memory integration tests, to speed development. This saves them money and time but increases their exposure to risk posed by defects.
SQLite is just a tool which may be used as part of one's testing strategy. Declaring "Don't test with SQLLite [sic] when you use Postgres in Production" ignores how it may be usefully applied to reduce risk in a project. In many cases SQLite is entirely appropriate, as the situation simply does not require high levels of assurance. In other cases, it may form part of a more holistic approach alongside testing against other database back-ends, or be removed entirely.
Not every organisation is NASA, and not every project handles secrets of national import. Most failures do not kill people. An honest assessment of the risks would ideally drive the selection of the testing strategy. Oftentimes this selection will be balanced against other concerns, such as time-to-market and budget. There is no silver bullet. A practical, well-rounded solution is often most appropriate.
Saturday, 6 December 2014
A Simple Mail Merge Application
My other half has just finished writing their first full-length novel. As such, they'd like to send it off to agents.
My first thought was OpenOffice Base. Have them enter the agent details into a table, and use OpenOffice Writer's mail merge facility. This, however, did not work, since Writer's mail merge facility lacked the all-important attachments functionality.
If OpenOffice had this functionality, we'd have been up and running in about 15 minutes. I'm surprised it doesn't; it's the ultimate way to automate the job search, so surely an unemployed programmer would've provided the functionality at some point...
But, leaving that by the wayside, I wasn't about to give up on OpenOffice just yet. I know that OpenOffice Base's files are just cunningly zipped HSQL databases, with some metadata surrounding them.
So, I thought I'd unzip the OpenOffice Base file and have a small Java application read the HSQL database, put the results through the Velocity template engine and send off the email.
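The Velocity half is pleasantly small. Something along these lines (a sketch; the template text and the agent fields are made up here, and the real template would live in its own file):

```java
import java.io.StringWriter;

import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

public class CoverLetterRenderer {

    // A trivial inline template for illustration; $agentName and $agencyName
    // are filled in from whatever comes out of the database row.
    private static final String TEMPLATE =
            "Dear $agentName,\n\n"
            + "I am seeking representation and thought of $agencyName because...\n";

    public static String render(String agentName, String agencyName) {
        VelocityEngine engine = new VelocityEngine();
        engine.init();

        VelocityContext context = new VelocityContext();
        context.put("agentName", agentName);
        context.put("agencyName", agencyName);

        StringWriter out = new StringWriter();
        engine.evaluate(context, out, "cover-letter", TEMPLATE);
        return out.toString();
    }
}
```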
This would've involved sneaker-netting the ODB file back and forth between my machine and my partner's, but that seemed ok. They'd enter many agents in during the day, and I'd "send" them all over night. No biggy.
This was also a bust. Once my Java application with its almighty HSQL JDBC jar had touched the database, it seemed to taint it. I think it bumped a version field in the database. This meant that OpenOffice Base refused to open it after even one round with my Java program.
So, plan C. SQLite is an amazing embedded database. Far faster and nicer than HSQL -- it even comes with a neat little command line interface.
I set up some test data in an SQLite database and pointed my Java program at it. Success!
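For the curious, the reading side amounts to little more than this sketch; the agents.db file name and the table columns are placeholders, and it assumes the xerial sqlite-jdbc driver on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AgentReader {

    public static void main(String[] args) throws SQLException {
        // The driver registers itself; the URL just points at the database file.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:agents.db");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name, email FROM agents")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + " <" + rs.getString("email") + ">");
            }
        }
    }
}
```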
So then I told OpenOffice Base to look at the file, so my partner could enter some data. Failure! OpenOffice Base had an issue with (I think) the scrolling modes available. So that was right out; it wouldn't even show the data in the OpenOffice interface. Sad times.
Plan D. Remember, you've always got to have at least 3 fall-back plans when developing software, otherwise nothing will ever work.
PostgreSQL to the rescue! I set up Postgres on my machine and opened a port for it. On my partner's machine, I tested that they could connect their OpenOffice Base to my Postgres. I then tested dropping some test data in the Postgres database and trying my Java program, configured to use Postgres... Success!!
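The pleasant part is that, as far as my Java program is concerned, the switch is little more than a different JDBC URL. A sketch of the two connection styles, where the host, database name and credentials are placeholders rather than my real setup:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionFactory {

    // The local SQLite file I used while testing on my machine.
    public static Connection openSqlite() throws SQLException {
        return DriverManager.getConnection("jdbc:sqlite:agents.db");
    }

    // The shared Postgres instance reached over the network;
    // host, database, user and password here are placeholders.
    public static Connection openPostgres() throws SQLException {
        return DriverManager.getConnection(
                "jdbc:postgresql://192.168.1.10:5432/agents", "mailmerge", "secret");
    }
}
```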
Now all I need to do is figure out how to send MIME multi-part HTML emails correctly ... ugh.
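The shape I'm aiming for is roughly the following, a JavaMail sketch with the SMTP host, addresses and attachment names made up; the fiddly encoding details are exactly what I still need to get right:

```java
import java.io.File;
import java.util.Properties;

import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

public class SubmissionMailer {

    public static void send(String to, String subject, String html) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // placeholder SMTP server

        Session session = Session.getInstance(props);
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("author@example.com"));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(to));
        message.setSubject(subject);

        // The HTML cover letter goes in one part...
        MimeBodyPart body = new MimeBodyPart();
        body.setContent(html, "text/html; charset=utf-8");

        // ...and the sample chapters and synopsis go in as attachments.
        MimeBodyPart sample = new MimeBodyPart();
        sample.attachFile(new File("sample-chapters.pdf")); // placeholder file name

        MimeMultipart multipart = new MimeMultipart("mixed");
        multipart.addBodyPart(body);
        multipart.addBodyPart(sample);
        message.setContent(multipart);

        Transport.send(message);
    }
}
```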
Anyway, this has been a day or so's worth of work on my part. I suspect I'll need another half-day to get HTML emails working correctly, and then I'll be sorted. Hopefully it'll enable my partner to effectively reach a whole host of agents without writing out the same damn cover letter, attaching various PDFs, DOC files, etc. over and over.
Once this is all wrapped up, I may open source it. I'll need to tidy the code, add tests and documentation, but it may be of use to someone.
The moral of the story is that HSQL is a difficult database to work with, SQLite is always awesome but some things don't support it, and PostgreSQL is the best RDBMS since sliced bread.