Changelog

3.26.1 (2022-05-02)

  • Now depends on click-default-group-wheel, a pure Python wheel package. This means you can install and use this package with Pyodide, which can run Python entirely in your browser using WebAssembly. (#429)

    Try that out using the Pyodide REPL:

    >>> import micropip
    >>> await micropip.install("sqlite-utils")
    >>> import sqlite_utils
    >>> db = sqlite_utils.Database(memory=True)
    >>> list(db.query("select 3 * 5"))
    [{'3 * 5': 15}]

3.26 (2022-04-13)

3.25.1 (2022-03-11)

3.25 (2022-03-01)

3.24 (2022-02-15)

3.23 (2022-02-03)

This release introduces four new utility methods for working with SpatiaLite. Thanks, Chris Amico. (#385)

3.22.1 (2022-01-25)

3.22 (2022-01-11)

  • New CLI reference documentation page, listing the output of --help for every one of the CLI commands. (#383)

  • sqlite-utils rows now has --limit and --offset options for paginating through data. (#381)

  • sqlite-utils rows now has --where and -p options for filtering the table using a WHERE query, see Returning all rows in a table. (#382)

3.21 (2022-01-10)

CLI and Python library improvements to help run ANALYZE after creating indexes or inserting rows, so that the SQLite query planner can make better use of those indexes.

Three new CLI commands: create-database, analyze and bulk.

More details and examples can be found in the annotated release notes.

  • New sqlite-utils create-database command for creating new empty database files. (#348)

  • New Python methods for running ANALYZE against a database, table or index: db.analyze() and table.analyze(), see Optimizing index usage with ANALYZE. (#366)

  • New sqlite-utils analyze command for running ANALYZE using the CLI. (#379)

  • The create-index, insert and upsert commands now have a new --analyze option for running ANALYZE after the command has completed. (#379)

  • New sqlite-utils bulk command which can import records in the same way as sqlite-utils insert (from JSON, CSV or TSV) and use them to bulk execute a parametrized SQL query. (#375)

  • The CLI tool can now also be run using python -m sqlite_utils. (#368)

  • Using --fmt now implies --table, so you don’t need to pass both options. (#374)

  • The --convert function applied to rows can now modify the row in place. (#371)

  • The insert-files command supports two new columns: stem and suffix. (#372)

  • The --nl import option now ignores blank lines in the input. (#376)

  • Fixed bug where streaming input to the insert command with --batch-size 1 would appear to only commit after several rows had been ingested, due to unnecessary input buffering. (#364)
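The effect of running ANALYZE can be sketched with Python's standard library sqlite3 module (table and index names here are illustrative, not from the library):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO people (name) VALUES (?)", [("a",), ("b",), ("c",)])
db.execute("CREATE INDEX idx_people_name ON people(name)")
# ANALYZE gathers statistics into the sqlite_stat1 table,
# which the query planner consults when deciding whether to use an index
db.execute("ANALYZE")
stats = db.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)
```

Running ANALYZE again after bulk inserts keeps those statistics current, which is what the new `--analyze` options automate.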

3.20 (2022-01-05)

  • sqlite-utils insert ... --lines to insert the lines from a file into a table with a single line column, see Inserting unstructured data with --lines and --text.

  • sqlite-utils insert ... --text to insert the contents of the file into a table with a single text column and a single row.

  • sqlite-utils insert ... --convert allows a Python function to be provided that will be used to convert each row that is being inserted into the database. See Applying conversions while inserting data, including details on special behavior when combined with --lines and --text. (#356)

  • sqlite-utils convert now accepts a code value of - to read code from standard input. (#353)

  • sqlite-utils convert also now accepts code that defines a named convert(value) function, see Converting data in columns.

  • db.supports_strict property showing if the database connection supports SQLite strict tables.

  • table.strict property (see .strict) indicating if the table uses strict mode. (#344)

  • Fixed bug where the sqlite-utils upsert command ignored the --detect-types option. (#362)

3.19 (2021-11-20)

  • The table.lookup() method now accepts keyword arguments that match those on the underlying table.insert() method: foreign_keys=, column_order=, not_null=, defaults=, extracts=, conversions= and columns=. You can also now pass pk= to specify a different column name to use for the primary key. (#342)

3.18 (2021-11-14)

  • The table.lookup() method now has an optional second argument which can be used to populate columns only the first time the record is created, see Working with lookup tables. (#339)

  • sqlite-utils memory now has a --flatten option for flattening nested JSON objects into separate columns, consistent with sqlite-utils insert. (#332)

  • table.create_index(..., find_unique_name=True) parameter, which finds an available name for the created index even if the default name has already been taken. This means that index-foreign-keys will work even if one of the indexes it tries to create clashes with an existing index name. (#335)

  • Added py.typed to the module, so mypy should now correctly pick up the type annotations. Thanks, Andreas Longo. (#331)

  • Now depends on python-dateutil instead of depending on dateutils. Thanks, Denys Pavlov. (#324)

  • table.create() (see Explicitly creating a table) now handles dict, list and tuple types, mapping them to TEXT columns in SQLite so that they can be stored encoded as JSON. (#338)

  • Inserted data with square braces in the column names (for example a CSV file containing an item[price] column) now has the braces converted to underscores: item_price_. Previously such columns would be rejected with an error. (#329)

  • Now also tested against Python 3.10. (#330)
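The JSON-in-TEXT storage used for dict, list and tuple values amounts to this pattern, sketched here with the stdlib sqlite3 and json modules (table and column names are illustrative):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, tags TEXT)")
# dict/list/tuple values are serialized to JSON and stored in a TEXT column
db.execute("INSERT INTO records (tags) VALUES (?)", (json.dumps(["a", "b"]),))
stored = db.execute("SELECT tags FROM records").fetchone()[0]
decoded = json.loads(stored)
print(decoded)  # ['a', 'b']
```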

3.17.1 (2021-09-22)

3.17 (2021-08-24)

  • The sqlite-utils memory command has a new --analyze option, which runs the equivalent of the analyze-tables command directly against the in-memory database created from the incoming CSV or JSON data. (#320)

  • sqlite-utils insert-files now has the ability to insert file contents into TEXT columns in addition to the default BLOB. Pass the --text option or use content_text as a column specifier. (#319)

3.16 (2021-08-18)

  • Type signatures added to more methods, including table.resolve_foreign_keys(), db.create_table_sql(), db.create_table() and table.create(). (#314)

  • New db.quote_fts(value) method, see Quoting characters for use in search - thanks, Mark Neumann. (#246)

  • table.search() now accepts an optional quote=True parameter. (#296)

  • CLI command sqlite-utils search now accepts a --quote option. (#296)

  • Fixed bug where --no-headers and --tsv options to sqlite-utils insert could not be used together. (#295)

  • Various small improvements to API reference documentation.

3.15.1 (2021-08-10)

  • Python library now includes type annotations on almost all of the methods, plus detailed docstrings describing each one. (#311)

  • New API reference documentation page, powered by those docstrings.

  • Fixed bug where .add_foreign_keys() failed to raise an error if called against a View. (#313)

  • Fixed bug where .delete_where() returned a [] instead of returning self if called against a non-existent table. (#315)

3.15 (2021-08-09)

  • sqlite-utils insert --flatten option for flattening nested JSON objects to create tables with column names like topkey_nestedkey. (#310)

  • Fixed several spelling mistakes in the documentation, spotted using codespell.

  • Errors that occur while using the sqlite-utils CLI tool now show the responsible SQL and query parameters, if possible. (#309)

3.14 (2021-08-02)

This release introduces the new sqlite-utils convert command (#251) and corresponding table.convert(…) Python method (#302). These tools can be used to apply a Python conversion function to one or more columns of a table, either updating the column in place or using transformed data from that column to populate one or more other columns.

This command-line example uses the Python standard library textwrap module to wrap the content of the content column in the articles table to 100 characters:

$ sqlite-utils convert content.db articles content \
    '"\n".join(textwrap.wrap(value, 100))' \
    --import=textwrap

The same operation in Python code looks like this:

import sqlite_utils, textwrap

db = sqlite_utils.Database("content.db")
db["articles"].convert("content", lambda v: "\n".join(textwrap.wrap(v, 100)))

See the full documentation for the sqlite-utils convert command and the table.convert(…) Python method for more details.

Also in this release:

  • The new table.count_where(...) method, for counting rows in a table that match a specific SQL WHERE clause. (#305)

  • New --silent option for the sqlite-utils insert-files command to hide the terminal progress bar, consistent with the --silent option for sqlite-utils convert. (#301)

3.13 (2021-07-24)

  • sqlite-utils schema my.db table1 table2 command now accepts optional table names. (#299)

  • sqlite-utils memory --help now describes the --schema option.

3.12 (2021-06-25)

  • New db.query(sql, params) method, which executes a SQL query and returns the results as an iterator over Python dictionaries. (#290)

  • This project now uses flake8 and has started to use mypy. (#291)

  • New documentation on contributing to this project. (#292)

3.11 (2021-06-20)

3.10 (2021-06-19)

This release introduces the sqlite-utils memory command, which can be used to load CSV or JSON data into a temporary in-memory database and run SQL queries (including joins across multiple files) directly against that data.

Also new: sqlite-utils insert --detect-types, sqlite-utils dump, table.use_rowid plus some smaller fixes.

sqlite-utils memory

This example of sqlite-utils memory retrieves information about all of the repositories in the Dogsheep organization on GitHub using this JSON API, sorts them by their number of stars and outputs a table of the top five (using -t):

$ curl -s 'https://api.github.com/users/dogsheep/repos' \
  | sqlite-utils memory - '
      select full_name, forks_count, stargazers_count
      from stdin order by stargazers_count desc limit 5
    ' -t
full_name                            forks_count    stargazers_count
---------------------------------  -------------  ------------------
dogsheep/twitter-to-sqlite                    12                 225
dogsheep/github-to-sqlite                     14                 139
dogsheep/dogsheep-photos                       5                 116
dogsheep/dogsheep.github.io                    7                  90
dogsheep/healthkit-to-sqlite                   4                  85

The tool works against files on disk as well. This example joins data from two CSV files:

$ cat creatures.csv
species_id,name
1,Cleo
2,Bants
2,Dori
2,Azi
$ cat species.csv
id,species_name
1,Dog
2,Chicken
$ sqlite-utils memory species.csv creatures.csv '
  select * from creatures join species on creatures.species_id = species.id
'
[{"species_id": 1, "name": "Cleo", "id": 1, "species_name": "Dog"},
 {"species_id": 2, "name": "Bants", "id": 2, "species_name": "Chicken"},
 {"species_id": 2, "name": "Dori", "id": 2, "species_name": "Chicken"},
 {"species_id": 2, "name": "Azi", "id": 2, "species_name": "Chicken"}]

Here the species.csv file becomes the species table, the creatures.csv file becomes the creatures table and the output is JSON, the default output format.

You can also use the --attach option to attach existing SQLite database files to the in-memory database, in order to join data from CSV or JSON directly against your existing tables.

Full documentation of this new feature is available in Querying data directly using an in-memory database. (#272)

sqlite-utils insert --detect-types

The sqlite-utils insert command can be used to insert data from JSON, CSV or TSV files into a SQLite database file. The new --detect-types option (shortcut -d), when used in conjunction with a CSV or TSV import, will automatically detect if columns in the file are integers or floating point numbers as opposed to treating everything as a text column and create the new table with the corresponding schema. See Inserting CSV or TSV data for details. (#282)
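The detection logic can be approximated in a few lines of Python. This detect_type() helper is hypothetical, written to illustrate the idea, and is not the library's actual implementation:

```python
# Hypothetical sketch of column type detection: try progressively
# stricter casts and fall back to TEXT if none of them fit.
def detect_type(values):
    for cast, name in ((int, "INTEGER"), (float, "FLOAT")):
        try:
            for value in values:
                cast(value)
            return name
        except ValueError:
            continue
    return "TEXT"

print(detect_type(["1", "2", "3"]))     # INTEGER
print(detect_type(["1.5", "2"]))        # FLOAT
print(detect_type(["1.5", "banana"]))   # TEXT
```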

Other changes

  • Bug fix: table.transform(), when run against a table without explicit primary keys, would incorrectly create a new version of the table with an explicit primary key column called rowid. (#284)

  • New table.use_rowid introspection property, see .use_rowid. (#285)

  • The new sqlite-utils dump file.db command outputs a SQL dump that can be used to recreate a database. (#274)

  • -h now works as a shortcut for --help, thanks Loren McIntyre. (#276)

  • Now using pytest-cov and Codecov to track test coverage - currently at 96%. (#275)

  • SQL errors that occur when using sqlite-utils query are now displayed as CLI errors.

3.9.1 (2021-06-12)

  • Fixed bug when using table.upsert_all() to create a table with only a single column that is treated as the primary key. (#271)

3.9 (2021-06-11)

3.8 (2021-06-02)

  • New sqlite-utils indexes command to list indexes in a database, see Listing indexes. (#263)

  • table.xindexes introspection property returning more details about that table’s indexes, see .xindexes. (#261)

3.7 (2021-05-28)

  • New table.pks_and_rows_where() method returning (primary_key, row_dictionary) tuples - see Listing rows with their primary keys. (#240)

  • Fixed bug with table.add_foreign_key() against columns containing spaces. (#238)

  • table_or_view.drop(ignore=True) option for avoiding errors if the table or view does not exist. (#237)

  • sqlite-utils drop-view --ignore and sqlite-utils drop-table --ignore options. (#237)

  • Fixed a bug with inserts of nested JSON containing non-ascii strings - thanks, Dylan Wu. (#257)

  • Suggest --alter if an error occurs caused by a missing column. (#259)

  • Support creating indexes with columns in descending order, see API documentation and CLI documentation. (#260)

  • Correctly handle CSV files that start with a UTF-8 BOM. (#250)

3.6 (2021-02-18)

This release adds the ability to execute queries joining data from more than one database file - similar to the cross database querying feature introduced in Datasette 0.55.

3.5 (2021-02-14)

  • sqlite-utils insert --sniff option for detecting the delimiter and quote character used by a CSV file, see Alternative delimiters and quote characters. (#230)

  • The table.rows_where(), table.search() and table.search_sql() methods all now take optional offset= and limit= arguments. (#231)

  • New --no-headers option for sqlite-utils insert --csv to handle CSV files that are missing the header row, see CSV files without a header row. (#228)

  • Fixed bug where inserting data with extra columns in subsequent chunks would throw an error. Thanks @nieuwenhoven for the fix. (#234)

  • Fixed bug importing CSV files with columns containing more than 128KB of data. (#229)

  • Test suite now runs in CI against Ubuntu, macOS and Windows. Thanks @nieuwenhoven for the Windows test fixes. (#232)

3.4.1 (2021-02-05)

  • Fixed a code import bug that slipped into 3.4. (#226)

3.4 (2021-02-05)

3.3 (2021-01-17)

3.2.1 (2021-01-12)

  • Fixed a bug where .add_missing_columns() failed to take case insensitive column names into account. (#221)

3.2 (2021-01-03)

This release introduces a new mechanism for speeding up count(*) queries using cached table counts, stored in a _counts table and updated by triggers. This mechanism is described in Cached table counts using triggers, and can be enabled using Python API methods or the new enable-counts CLI command. (#212)

  • table.enable_counts() method for enabling these triggers on a specific table.

  • db.enable_counts() method for enabling triggers on every table in the database. (#213)

  • New sqlite-utils enable-counts my.db command for enabling counts on all or specific tables, see Enabling cached counts. (#214)

  • New sqlite-utils triggers command for listing the triggers defined for a database or specific tables, see Listing triggers. (#218)

  • New db.use_counts_table property which, if True, causes table.count to read from the _counts table. (#215)

  • table.has_counts_triggers property revealing if a table has been configured with the new _counts database triggers.

  • db.reset_counts() method and sqlite-utils reset-counts command for resetting the values in the _counts table. (#219)

  • The previously undocumented db.escape() method has been renamed to db.quote() and is now covered by the documentation: Quoting strings for use in SQL. (#217)

  • New table.triggers_dict and db.triggers_dict introspection properties. (#211, #216)

  • sqlite-utils insert now shows a more useful error message for invalid JSON. (#206)
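The trigger mechanism can be sketched with the stdlib sqlite3 module. This is a simplified illustration of the idea, not the exact triggers the library creates:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE _counts ([table] TEXT PRIMARY KEY, count INTEGER DEFAULT 0);
INSERT INTO _counts VALUES ('articles', 0);
-- keep the cached count in sync on every insert and delete
CREATE TRIGGER articles_insert AFTER INSERT ON articles BEGIN
    UPDATE _counts SET count = count + 1 WHERE [table] = 'articles';
END;
CREATE TRIGGER articles_delete AFTER DELETE ON articles BEGIN
    UPDATE _counts SET count = count - 1 WHERE [table] = 'articles';
END;
""")
db.executemany("INSERT INTO articles (title) VALUES (?)", [("a",), ("b",), ("c",)])
db.execute("DELETE FROM articles WHERE title = 'a'")
# the cached count can now be read without a full table scan
cached = db.execute(
    "SELECT count FROM _counts WHERE [table] = 'articles'"
).fetchone()[0]
print(cached)  # 2
```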

3.1.1 (2021-01-01)

  • Fixed failing test caused by optimize sometimes creating larger database files. (#209)

  • Documentation now lives on https://sqlite-utils.datasette.io/

  • README now includes brew install sqlite-utils installation method.

3.1 (2020-12-12)

  • New command: sqlite-utils analyze-tables my.db outputs useful information about the table columns in the database, such as the number of distinct values and how many rows are null. See Analyzing tables for documentation. (#207)

  • New table.analyze_column(column) Python method used by the analyze-tables command - see Analyzing a column.

  • The table.update() method now correctly handles values that should be stored as JSON. Thanks, Andreas Madsack. (#204)

3.0 (2020-11-08)

This release introduces a new sqlite-utils search command for searching tables, see Executing searches. (#192)

The table.search() method has been redesigned, see Searching with table.search(). (#197)

The release includes minor backwards-incompatible changes, hence the version bump to 3.0. Those changes, which should not affect most users, are:

  • The -c shortcut option for outputting CSV is no longer available. The full --csv option is required instead.

  • The -f shortcut for --fmt has also been removed - use --fmt.

  • The table.search() method now defaults to sorting by relevance, not sorting by rowid. (#198)

  • The table.search() method now returns a generator yielding Python dictionaries. It previously returned a list of tuples.

Also in this release:

  • The query, tables, rows and search CLI commands now accept a new --tsv option which outputs the results in TSV. (#193)

  • A new table.virtual_table_using property reveals if a table is a virtual table, and returns the upper case type of virtual table (e.g. FTS4 or FTS5) if it is. It returns None if the table is not a virtual table. (#196)

  • The new table.search_sql() method returns the SQL for searching a table, see Building SQL queries with table.search_sql().

  • sqlite-utils rows now accepts multiple optional -c parameters specifying the columns to return. (#200)

Changes since the 3.0a0 alpha release:

  • The sqlite-utils search command now defaults to returning every result, unless you add a --limit 20 option.

  • The sqlite-utils search -c and table.search(columns=[]) options are now fully respected. (#201)

2.23 (2020-10-28)

  • table.m2m(other_table, records) method now takes any iterable, not just a list or tuple. Thanks, Adam Wolf. (#189)

  • sqlite-utils insert now displays a progress bar for CSV or TSV imports. (#173)

  • New @db.register_function(deterministic=True) option for registering deterministic SQLite functions in Python 3.8 or higher. (#191)

2.22 (2020-10-16)

  • New --encoding option for processing CSV and TSV files that use a non-utf-8 encoding, for both the insert and update commands. (#182)

  • The --load-extension option is now available to many more commands. (#137)

  • --load-extension=spatialite can be used to load SpatiaLite from common installation locations, if it is available. (#136)

  • Tests now also run against Python 3.9. (#184)

  • Passing pk=["id"] now has the same effect as passing pk="id". (#181)

2.21 (2020-09-24)

  • table.extract() and sqlite-utils extract now apply much, much faster - one example operation reduced from twelve minutes to just four seconds! (#172)

  • sqlite-utils extract no longer shows a progress bar, because it’s fast enough not to need one.

  • New column_order= option for table.transform() which can be used to alter the order of columns in a table. (#175)

  • sqlite-utils transform --column-order= option (with a -o shortcut) for changing column order. (#176)

  • The table.transform(drop_foreign_keys=) parameter and the sqlite-utils transform --drop-foreign-key option have changed. They now accept just the name of the column rather than requiring all three of the column, other table and other column. This is technically a backwards-incompatible change but I chose not to bump the major version number because the transform feature is so new. (#177)

  • The table .disable_fts(), .rebuild_fts(), .delete(), .delete_where() and .add_missing_columns() methods all now return self, which means they can be chained together with other table operations.

2.20 (2020-09-22)

This release introduces two key new capabilities: transform (#114) and extract (#42).

Transform

SQLite’s ALTER TABLE has several documented limitations. The table.transform() Python method and sqlite-utils transform CLI command work around these limitations using a pattern where a new table with the desired structure is created, data is copied over to it and the old table is then dropped and replaced by the new one.

You can use these tools to change column types, rename columns, drop columns, add and remove NOT NULL and defaults, remove foreign key constraints and more. See the transforming tables (CLI) and transforming tables (Python library) documentation for full details of how to use them.
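The underlying pattern looks like this in plain SQL, shown via the stdlib sqlite3 module (table names are illustrative; the real implementation handles foreign keys, indexes and more):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dogs (id INTEGER PRIMARY KEY, age TEXT);
INSERT INTO dogs VALUES (1, '4'), (2, '7');
-- transform pattern: build a replacement table with the desired schema,
-- copy the data across, then swap it in for the original
CREATE TABLE dogs_new (id INTEGER PRIMARY KEY, age INTEGER);
INSERT INTO dogs_new SELECT id, CAST(age AS INTEGER) FROM dogs;
DROP TABLE dogs;
ALTER TABLE dogs_new RENAME TO dogs;
""")
rows = db.execute("SELECT id, age FROM dogs ORDER BY id").fetchall()
print(rows)  # [(1, 4), (2, 7)] - age is now INTEGER
```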

Extract

Sometimes a database table - especially one imported from a CSV file - will contain duplicate data. A Trees table may include a Species column with only a few dozen unique values, when the table itself contains thousands of rows.

The table.extract() method and sqlite-utils extract commands can extract a column - or multiple columns - out into a separate lookup table, and set up a foreign key relationship from the original table.

The Python library extract() documentation describes how extraction works in detail, and Extracting columns into a separate table in the CLI documentation includes a detailed example.
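A simplified sketch of the extraction pattern in plain SQL, via the stdlib sqlite3 module (the real commands go further, for example removing the original text column using the transform mechanism):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE trees (id INTEGER PRIMARY KEY, species TEXT);
INSERT INTO trees (species) VALUES ('Oak'), ('Oak'), ('Palm');
-- build the lookup table from the distinct values
CREATE TABLE species (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
INSERT INTO species (name) SELECT DISTINCT species FROM trees;
-- point each original row at its lookup entry via a foreign key
ALTER TABLE trees ADD COLUMN species_id INTEGER REFERENCES species(id);
UPDATE trees SET species_id = (
    SELECT id FROM species WHERE name = trees.species
);
""")
rows = db.execute("""
    SELECT trees.id, species.name
    FROM trees JOIN species ON trees.species_id = species.id
    ORDER BY trees.id
""").fetchall()
print(rows)  # [(1, 'Oak'), (2, 'Oak'), (3, 'Palm')]
```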

Other changes

  • The @db.register_function decorator can be used to quickly register Python functions as custom SQL functions, see Registering custom SQL functions. (#162)

  • The table.rows_where() method now accepts an optional select= argument for specifying which columns should be selected, see Listing rows.

2.19 (2020-09-20)

  • New sqlite-utils add-foreign-keys command for Adding multiple foreign keys at once. (#157)

  • New table.enable_fts(..., replace=True) argument for replacing an existing FTS table with a new configuration. (#160)

  • New table.add_foreign_key(..., ignore=True) argument for ignoring a foreign key if it already exists. (#112)

2.18 (2020-09-08)

  • table.rebuild_fts() method for rebuilding a FTS index, see Rebuilding a full-text search table. (#155)

  • sqlite-utils rebuild-fts data.db command for rebuilding FTS indexes across all tables, or just specific tables. (#155)

  • table.optimize() method no longer deletes junk rows from the *_fts_docsize table. This was added in 2.17 but it turns out running table.rebuild_fts() is a better solution to this problem.

  • Fixed a bug where rows with additional columns that are inserted after the first batch of records could cause an error due to breaking SQLite’s maximum number of parameters. Thanks, Simon Wiles. (#145)

2.17 (2020-09-07)

This release handles a bug where replacing rows in FTS tables could result in growing numbers of unnecessary rows in the associated *_fts_docsize table. (#149)

  • PRAGMA recursive_triggers=on by default for all connections. You can turn it off with Database(recursive_triggers=False). (#152)

  • table.optimize() method now deletes unnecessary rows from the *_fts_docsize table. (#153)

  • New tracer method for tracking underlying SQL queries, see Tracing queries. (#150)

  • Neater indentation for schema SQL. (#148)

  • Documentation for the sqlite_utils.AlterError exception thrown by add_foreign_keys().

2.16.1 (2020-08-28)

  • insert_all(..., alter=True) now works for columns introduced after the first 100 records. Thanks, Simon Wiles! (#139)

  • Continuous Integration is now powered by GitHub Actions. (#143)

2.16 (2020-08-21)

  • --load-extension option for sqlite-utils query for loading SQLite extensions. (#134)

  • New sqlite_utils.utils.find_spatialite() function for finding SpatiaLite in common locations. (#135)

2.15.1 (2020-08-12)

  • Now available as a sdist package on PyPI in addition to a wheel. (#133)

2.15 (2020-08-10)

  • New db.enable_wal() and db.disable_wal() methods for enabling and disabling Write-Ahead Logging for a database file - see WAL mode in the Python API documentation.

  • Also sqlite-utils enable-wal file.db and sqlite-utils disable-wal file.db commands for doing the same thing on the command-line, see WAL mode (CLI). (#132)

2.14.1 (2020-08-05)

  • Documentation improvements.

2.14 (2020-08-01)

  • The insert-files command can now read from standard input: cat dog.jpg | sqlite-utils insert-files dogs.db pics - --name=dog.jpg. (#127)

  • You can now specify a full-text search tokenizer using the new tokenize= parameter to enable_fts(). This means you can enable Porter stemming on a table by running db["articles"].enable_fts(["headline", "body"], tokenize="porter"). (#130)

  • You can also set a custom tokenizer using the sqlite-utils enable-fts CLI command, via the new --tokenize option.

2.13 (2020-07-29)

  • memoryview and uuid.UUID objects are now supported. memoryview objects will be stored using BLOB and uuid.UUID objects will be stored using TEXT. (#128)

2.12 (2020-07-27)

The theme of this release is better tools for working with binary data. The new insert-files command can be used to insert binary files directly into a database table, and other commands have been improved with better support for BLOB columns.

  • sqlite-utils insert-files my.db gifs *.gif can now insert the contents of files into a specified table. The columns in the table can be customized to include different pieces of metadata derived from the files. See Inserting data from files. (#122)

  • --raw option to sqlite-utils query - for outputting just a single raw column value - see Returning raw data, such as binary content. (#123)

  • JSON output now encodes BLOB values as special base64 objects - see Returning JSON. (#125)

  • The same format of JSON base64 objects can now be used to insert binary data - see Inserting JSON data. (#126)

  • The sqlite-utils query command can now accept named parameters, e.g. sqlite-utils :memory: "select :num * :num2" -p num 5 -p num2 6 - see Returning JSON. (#124)

2.11 (2020-07-08)

  • New --truncate option to sqlite-utils insert, and truncate=True argument to .insert_all(). Thanks, Thomas Sibley. (#118)

  • The sqlite-utils query command now runs updates in a transaction. Thanks, Thomas Sibley. (#120)

2.10.1 (2020-06-23)

  • Added documentation for the table.pks introspection property. (#116)

2.10 (2020-06-12)

  • The sqlite-utils command now supports UPDATE/INSERT/DELETE in addition to SELECT. (#115)

2.9.1 (2020-05-11)

2.9 (2020-05-10)

  • New sqlite-utils drop-table command, see Dropping tables. (#111)

  • New sqlite-utils drop-view command, see Dropping views.

  • Python decimal.Decimal objects are now stored as FLOAT. (#110)

2.8 (2020-05-03)

2.7.2 (2020-05-02)

  • db.create_view(...) now has additional parameters ignore=True or replace=True, see Creating views. (#106)

2.7.1 (2020-05-01)

  • New sqlite-utils views my.db command for listing views in a database, see Listing views. (#105)

  • sqlite-utils tables (and views) has a new --schema option which outputs the table/view schema, see Listing tables. (#104)

  • Nested structures containing invalid JSON values (e.g. Python bytestrings) are now serialized using repr() instead of throwing an error. (#102)

2.7 (2020-04-17)

  • New columns= argument for the .insert(), .insert_all(), .upsert() and .upsert_all() methods, for over-riding the auto-detected types for columns and specifying additional columns that should be added when the table is created. See Custom column order and column types. (#100)

2.6 (2020-04-15)

  • New table.rows_where(..., order_by="age desc") argument, see Listing rows. (#76)

2.5 (2020-04-12)

  • pandas’ Timestamp is now stored as a SQLite TEXT column. Thanks, b0b5h4rp13! (#96)

  • table.last_pk is now only available for inserts or upserts of a single record. (#98)

  • New Database(filepath, recreate=True) parameter for deleting and recreating the database. (#97)

2.4.4 (2020-03-23)

  • Fixed bug where columns with only null values were not correctly created. (#95)

2.4.3 (2020-03-23)

  • Column type suggestion code is no longer confused by null values. (#94)

2.4.2 (2020-03-14)

  • table.column_dicts now works with all column types - previously it would throw errors on types other than TEXT, BLOB, INTEGER or FLOAT. (#92)

  • Documentation for NotFoundError thrown by table.get(pk) - see Retrieving a specific record.

2.4.1 (2020-03-01)

  • table.enable_fts() now works with columns that contain spaces. (#90)

2.4 (2020-02-26)

  • table.disable_fts() can now be used to remove FTS tables and triggers that were created using table.enable_fts(...). (#88)

  • The sqlite-utils disable-fts command can be used to remove FTS tables and triggers from the command-line. (#88)

  • Trying to create table columns with square braces ([ or ]) in the name now raises an error. (#86)

  • Subclasses of dict, list and tuple are now detected as needing a JSON column. (#87)

2.3.1 (2020-02-10)

table.create_index() now works for columns that contain spaces. (#85)

2.3 (2020-02-08)

table.exists() is now a method, not a property. This was not a documented part of the API before so I’m considering this a non-breaking change. (#83)

2.2.1 (2020-02-06)

Fixed a bug where .upsert(..., hash_id="pk") threw an error (#84).

2.2 (2020-02-01)

New feature: sqlite_utils.suggest_column_types([records]) returns the suggested column types for a list of records. See Suggesting column types. (#81).

This replaces the undocumented table.detect_column_types() method.

2.1 (2020-01-30)

New feature: conversions={...} can be passed to the .insert() family of functions to specify SQL conversions that should be applied to values that are being inserted or updated. See Converting column values using SQL functions. (#77)

2.0.1 (2020-01-05)

The .upsert() and .upsert_all() methods now raise a sqlite_utils.db.PrimaryKeyRequired exception if you call them without specifying the primary key column using pk= (#73).

2.0 (2019-12-29)

This release changes the behaviour of upsert. It’s a breaking change, hence 2.0.

The upsert command-line utility and the .upsert() and .upsert_all() Python API methods have had their behaviour altered. They used to completely replace the affected records: now, they update the specified values on existing records but leave other columns unaffected.

See Upserting data using the Python API and Upserting data using the CLI for full details.

If you want the old behaviour - where records were completely replaced - you can use $ sqlite-utils insert ... --replace on the command-line and .insert(..., replace=True) and .insert_all(..., replace=True) in the Python API. See Insert-replacing data using the Python API and Insert-replacing data using the CLI for more.

For full background on this change, see issue #66.
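In raw SQL terms the difference is roughly INSERT OR REPLACE versus an ON CONFLICT ... DO UPDATE upsert, sketched here with the stdlib sqlite3 module (requires SQLite 3.24+ for the ON CONFLICT clause; this illustrates the semantics, not the library's internals):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dogs (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
db.execute("INSERT INTO dogs VALUES (1, 'Cleo', 4)")

# new upsert behaviour: only the supplied columns are updated
db.execute(
    "INSERT INTO dogs (id, age) VALUES (1, 5) "
    "ON CONFLICT(id) DO UPDATE SET age = excluded.age"
)
after_upsert = db.execute("SELECT name, age FROM dogs WHERE id = 1").fetchone()
print(after_upsert)  # ('Cleo', 5) - the name column survives

# old behaviour (now insert --replace): the whole record is replaced
db.execute("INSERT OR REPLACE INTO dogs (id, age) VALUES (1, 6)")
after_replace = db.execute("SELECT name, age FROM dogs WHERE id = 1").fetchone()
print(after_replace)  # (None, 6) - the name column is wiped
```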

1.12.1 (2019-11-06)

  • Fixed error thrown when .insert_all() and .upsert_all() were called with empty lists (#52)

1.12 (2019-11-04)

Python library utilities for deleting records (#62)

1.11 (2019-09-02)

Option to create triggers to automatically keep FTS tables up-to-date with newly inserted, updated and deleted records. Thanks, Amjith Ramanujam! (#57)

1.10 (2019-08-23)

Ability to introspect and run queries against views (#54)

  • db.view_names() method and db.views property

  • Separate View and Table classes, both subclassing new Queryable class

  • view.drop() method

See Listing views.
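At the SQL level a view is queried exactly like a table, which is what makes sharing a Queryable base class natural. A stdlib sqlite3 sketch:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dogs (name TEXT, good INTEGER)")
db.execute("INSERT INTO dogs VALUES ('Cleo', 1)")
db.execute("CREATE VIEW good_dogs AS SELECT name FROM dogs WHERE good = 1")

# Views can be queried with the same SELECT syntax as tables
print(db.execute("SELECT name FROM good_dogs").fetchall())  # [('Cleo',)]
```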

1.9 (2019-08-04)

1.8 (2019-07-28)

1.7.1 (2019-07-28)

  • Fixed bug where inserting records with 11 columns in a batch of 100 triggered a “too many SQL variables” error (#50)

  • Documentation and tests for table.drop() method: Dropping a table or view

1.7 (2019-07-24)

Support for lookup tables.
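The lookup pattern is get-or-create: return the id of an existing row for a value, inserting it only if it is new. A stdlib sqlite3 sketch of the idea (the lookup helper here is illustrative, not the sqlite-utils API):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE species (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

def lookup(db, name):
    """Return the id for name, inserting a new row only if it is not present."""
    row = db.execute("SELECT id FROM species WHERE name = ?", (name,)).fetchone()
    if row:
        return row[0]
    cursor = db.execute("INSERT INTO species (name) VALUES (?)", (name,))
    return cursor.lastrowid

first = lookup(db, "Palm")
again = lookup(db, "Palm")
print(first == again)  # True: the same id both times, no duplicate row
```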

1.6 (2019-07-18)

  • sqlite-utils insert can now accept TSV data via the new --tsv option (#41)

1.5 (2019-07-14)

  • Support for compound primary keys (#36)

    • Configure these using the CLI tool by passing --pk multiple times

    • In Python, pass a tuple of columns to the pk=(..., ...) argument: Compound primary keys

  • New table.get() method for retrieving a record by its primary key: Retrieving a specific record (#39)
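With a compound primary key, retrieving a record means filtering on every primary key column. The equivalent SQL, sketched with stdlib sqlite3 and an illustrative table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE population (
        country TEXT,
        year INTEGER,
        count INTEGER,
        PRIMARY KEY (country, year)
    )
""")
db.execute("INSERT INTO population VALUES ('NZ', 2019, 4917000)")

# table.get() with a compound key takes one value per primary key column;
# the underlying query filters on all of them:
row = db.execute(
    "SELECT * FROM population WHERE country = ? AND year = ?", ("NZ", 2019)
).fetchone()
print(row)  # ('NZ', 2019, 4917000)
```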

1.4.1 (2019-07-14)

1.4 (2019-06-30)

  • Added sqlite-utils index-foreign-keys command (docs) and db.index_foreign_keys() method (docs) (#33)
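The underlying technique: walk each table's PRAGMA foreign_key_list and create an index on any foreign key column. A stdlib sqlite3 sketch (the idx_ naming is illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY)")
db.execute(
    "CREATE TABLE books (id INTEGER PRIMARY KEY, "
    "author_id INTEGER REFERENCES authors(id))"
)

tables = [r[0] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()]
for table in tables:
    # PRAGMA foreign_key_list row layout: (id, seq, table, from, to, ...)
    for fk in db.execute(f"PRAGMA foreign_key_list({table})").fetchall():
        column = fk[3]  # the "from" column of the foreign key
        db.execute(
            f"CREATE INDEX IF NOT EXISTS idx_{table}_{column} "
            f"ON {table} ({column})"
        )

indexes = [r[1] for r in db.execute("PRAGMA index_list(books)")]
print(indexes)  # ['idx_books_author_id']
```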

1.3 (2019-06-28)

1.2.2 (2019-06-25)

  • Fixed bug where datetime.time was not being handled correctly

1.2.1 (2019-06-20)

  • Check the column exists before attempting to add a foreign key (#29)

1.2 (2019-06-12)

1.1 (2019-05-28)

1.0.1 (2019-05-27)

  • sqlite-utils rows data.db table --json-cols - fixed bug where --json-cols was not obeyed

1.0 (2019-05-24)

0.14 (2019-02-24)

  • Ability to create unique indexes: db["mytable"].create_index(["name"], unique=True)

  • db["mytable"].create_index(["name"], if_not_exists=True)

  • $ sqlite-utils create-index mydb.db mytable col1 [col2...], see Creating indexes

  • table.add_column(name, type) method, see Adding columns

  • $ sqlite-utils add-column mydb.db mytable nameofcolumn, see Adding columns (CLI)

  • db["books"].add_foreign_key("author_id", "authors", "id"), see Adding foreign key constraints

  • $ sqlite-utils add-foreign-key books.db books author_id authors id, see Adding foreign key constraints (CLI)

  • Improved (but backwards-incompatible) foreign_keys= argument to various methods, see Specifying foreign keys
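The unique index feature above corresponds to SQL like the following, sketched with stdlib sqlite3 (the index name here is illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mytable (name TEXT)")
# Roughly what db["mytable"].create_index(["name"], unique=True) produces
db.execute("CREATE UNIQUE INDEX IF NOT EXISTS idx_mytable_name ON mytable (name)")
db.execute("INSERT INTO mytable VALUES ('Cleo')")
try:
    # The second insert violates the unique index
    db.execute("INSERT INTO mytable VALUES ('Cleo')")
except sqlite3.IntegrityError as ex:
    print(ex)
```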

0.13 (2019-02-23)

  • New --table and --fmt options can be used to output query results in a variety of visual table formats, see Table-formatted output

  • New hash_id= argument can now be used for Setting an ID based on the hash of the row contents

  • Can now derive correct column types for numpy int, uint and float values

  • table.last_id has been renamed to table.last_rowid

  • table.last_pk now contains the last inserted primary key, if pk= was specified

  • Prettier indentation in the CREATE TABLE generated schemas
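The hash_id= idea is that the primary key is derived deterministically from the row contents. One plausible scheme - not necessarily sqlite-utils' exact algorithm - hashes a canonical JSON serialization:

```python
import hashlib
import json

def hash_id(record):
    # Sorting keys makes the hash independent of dict ordering,
    # so the same contents always produce the same id
    serialized = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha1(serialized.encode("utf-8")).hexdigest()

a = hash_id({"name": "Cleo", "age": 4})
b = hash_id({"age": 4, "name": "Cleo"})
print(a == b)  # True: same contents, same id
```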

0.12 (2019-02-22)

  • Added db[table].rows iterator - see Listing rows

  • Replaced sqlite-utils json and sqlite-utils csv with a new default subcommand called sqlite-utils query which defaults to JSON and takes formatting options --nl, --csv and --no-headers - see Returning JSON and Returning CSV or TSV

  • New sqlite-utils rows data.db name-of-table command, see Returning all rows in a table

  • sqlite-utils table command now takes options --counts and --columns plus the standard output format options, see Listing tables

0.11 (2019-02-07)

New commands for enabling FTS against a table and columns:

sqlite-utils enable-fts db.db mytable col1 col2

See Configuring full-text search.

0.10 (2019-02-06)

Handle datetime.date and datetime.time values.

New option for efficiently inserting rows from a CSV:

sqlite-utils insert db.db foo - --csv
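The mechanics can be sketched with the stdlib csv and sqlite3 modules; note that CSV values arrive as strings, which is why the columns here are TEXT:

```python
import csv
import io
import sqlite3

db = sqlite3.connect(":memory:")
csv_data = io.StringIO("id,name\n1,Cleo\n2,Pancakes\n")

reader = csv.reader(csv_data)
headers = next(reader)  # consume the header row
db.execute("CREATE TABLE foo (id TEXT, name TEXT)")
# executemany() streams the remaining rows into the table efficiently
db.executemany("INSERT INTO foo VALUES (?, ?)", reader)
print(db.execute("SELECT count(*) FROM foo").fetchone()[0])  # 2
```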

0.9 (2019-01-27)

Improved support for newline-delimited JSON.

sqlite-utils insert has two new command-line options:

  • --nl means “expect newline-delimited JSON”. This is an extremely efficient way of loading in large amounts of data, especially if you pipe it into standard input.

  • --batch-size=1000 lets you increase the batch size (the default is 100). A commit is issued after every batch. This also controls how many initial records are considered when detecting the desired SQL table schema for the data.
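The batching described above can be sketched with the stdlib: parse one JSON record per line, insert in batches, commit once per batch. The helper and fixed schema here are illustrative (the real tool detects the schema from the initial records):

```python
import itertools
import json
import sqlite3

def insert_ndjson(db, lines, batch_size=100):
    """Insert newline-delimited JSON records, committing once per batch."""
    records = (json.loads(line) for line in lines if line.strip())
    while True:
        batch = list(itertools.islice(records, batch_size))
        if not batch:
            break
        db.executemany(
            "INSERT INTO items (id, name) VALUES (:id, :name)", batch
        )
        db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER, name TEXT)")
ndjson = '{"id": 1, "name": "one"}\n{"id": 2, "name": "two"}\n'
insert_ndjson(db, ndjson.splitlines(), batch_size=1)
print(db.execute("SELECT count(*) FROM items").fetchone()[0])  # 2
```

Because the records are consumed through a generator, the source can be arbitrarily large - only one batch is held in memory at a time.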

In the Python API, the table.insert_all(...) method can now accept a generator as well as a list of objects. This will be efficiently used to populate the table no matter how many records are produced by the generator.

The Database() constructor can now accept a pathlib.Path object in addition to a string or an existing SQLite connection object.

0.8 (2019-01-25)

Two new commands: sqlite-utils csv and sqlite-utils json

These commands execute a SQL query and return the results as CSV or JSON. See Returning CSV or TSV and Returning JSON for more details.

$ sqlite-utils json --help
Usage: sqlite-utils json [OPTIONS] PATH SQL

  Execute SQL query and return the results as JSON

Options:
  --nl      Output newline-delimited JSON
  --arrays  Output rows as arrays instead of objects
  --help    Show this message and exit.

$ sqlite-utils csv --help
Usage: sqlite-utils csv [OPTIONS] PATH SQL

  Execute SQL query and return the results as CSV

Options:
  --no-headers  Exclude headers from CSV output
  --help        Show this message and exit.

0.7 (2019-01-24)

This release implements the sqlite-utils command-line tool with a number of useful subcommands.

  • sqlite-utils tables demo.db lists the tables in the database

  • sqlite-utils tables demo.db --fts4 shows just the FTS4 tables

  • sqlite-utils tables demo.db --fts5 shows just the FTS5 tables

  • sqlite-utils vacuum demo.db runs VACUUM against the database

  • sqlite-utils optimize demo.db runs OPTIMIZE against all FTS tables, then VACUUM

  • sqlite-utils optimize demo.db --no-vacuum runs OPTIMIZE but skips VACUUM

The two most useful subcommands are upsert and insert, which allow you to ingest JSON files with one or more records in them, creating the corresponding table with the correct columns if it does not already exist. See Inserting JSON data for more details.

  • sqlite-utils insert demo.db dogs dogs.json --pk=id inserts new records from dogs.json into the dogs table

  • sqlite-utils upsert demo.db dogs dogs.json --pk=id upserts records, replacing any records with duplicate primary keys

One backwards incompatible change: the db["table"].table_names property is now a method:

  • db["table"].table_names() returns a list of table names

  • db["table"].table_names(fts4=True) returns a list of just the FTS4 tables

  • db["table"].table_names(fts5=True) returns a list of just the FTS5 tables

A few other changes:

  • Plenty of updated documentation, including full coverage of the new command-line tool

  • Allow column names to be reserved words (use correct SQL escaping)

  • Added automatic column support for bytes and datetime.datetime

0.6 (2018-08-12)

  • .enable_fts() now takes optional argument fts_version, defaults to FTS5. Use FTS4 if the version of SQLite bundled with your Python does not support FTS5

  • New optional column_order= argument to .insert() and friends for providing a partial or full desired order of the columns when a database table is created

  • New documentation for .insert_all() and .upsert() and .upsert_all()

0.5 (2018-08-05)

  • db.tables and db.table_names introspection properties

  • db.indexes property for introspecting indexes

  • table.create_index(columns, index_name) method

  • db.create_view(name, sql) method

  • Table methods can now be chained, plus added table.last_id for accessing the last inserted row ID

0.4 (2018-07-31)

  • enable_fts(), populate_fts() and search() table methods