NEWS.md
- `postgresImportLargeObject()` for importing large objects from the client side (@toppyy, #376, #472).
- `dbWriteTable()` correctly handles name clashes between temporary and permanent tables (#402, #431).
- `dbQuoteIdentifier()` for `Id()` objects no longer relies on names (#460).
- `dbQuoteIdentifier()` (@dpprdan, #263, #372).
- `dbQuoteLiteral()` correctly quotes 64-bit integers from the bit64 package (of class `"integer64"`) (@karawoo, #435, #436).
- Breaking change: `dbListObjects()` only allows `Id()` objects as `prefix` argument (@dpprdan, #390).
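The stricter `prefix` handling can be sketched as follows (a minimal, hypothetical example; the connection parameters and schema name are placeholders, not taken from the changelog):

```r
library(DBI)

# Placeholder connection; adjust dbname/host/user for a real server.
con <- dbConnect(RPostgres::Postgres(), dbname = "mydb")

# Allowed: an Id() object as the prefix
dbListObjects(con, prefix = Id(schema = "public"))

# A bare string such as prefix = "public" is no longer accepted.

dbDisconnect(con)
```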
- Upgrade boost to 1.81.0-1 to fix `sprintf` warnings (#417).
- One-click setup for https://gitpod.io (@Antonov548, #407).
- Use testthat edition 3 (#408).
- `postgresIsTransacting()` (#351, @jakob-r).
- For `Redshift()` connections, all DBItest tests pass (#358, @galachad).
- `setMethod()` calls refer to top-level functions (#380).
- `dbWriteTable()` uses savepoints for its transactions, even if an external transaction is open. This does not affect Redshift, because savepoints are not supported there (#342).
- With `dbConnect(check_interrupts = TRUE)`, interrupting a query now gives a dedicated error message. Very short-running queries no longer take one second to complete (#344).
- `dbQuoteLiteral()` correctly quotes length-0 values (#355) and generates typed `NULL` expressions for `NA` values (#357).
- The `SET DATESTYLE` query sent after connecting uses quotes for compatibility with CockroachDB (#360).
- `dbConnect()` executes initial queries with `immediate = TRUE` (#346).
- `libssl-dev` in the `configure` script (#350).
- `Redshift()` connections now adhere to almost all of the DBI specification when connecting to a Redshift cluster. BLOBs are not supported on Redshift, and there are limitations with enumerating temporary and persistent tables (#215, #326).
- `dbBegin()`, `dbCommit()` and `dbRollback()` gain a `name` argument to support savepoints. An unnamed transaction must be started beforehand (#13).
- `dbWriteTable()` uses a transaction (#307).
- `dbSendQuery()` gains an `immediate` argument. Multiple queries (separated by semicolons) can be passed in this mode; query parameters are not supported (#272).
- `dbConnect(check_interrupts = TRUE)` now aborts a running query faster and more reliably when the user signals an interrupt, e.g. by pressing Ctrl+C (#336).
- `dbAppendTable()` gains a `copy` argument. If set to `TRUE`, data is imported via `COPY name FROM STDIN` (#241, @hugheylab).
- `NOTICE` messages are now forwarded as proper R messages and can be captured and suppressed (#208).
- `dbQuoteLiteral()` converts timestamp values to the input time zone, used when writing tables to Redshift (#325).
- `dbSendQuery()` and `dbQuoteLiteral()` use single dispatch (#320).
- `dbWriteTable()` and `dbAppendTable()` default to `copy = NULL`; this translates to `TRUE` for `Postgres()` and `FALSE` for `Redshift()` connections (#329).
- `@examplesIf` in method documentation.
- `field.types` is used in `dbWriteTable()` (#206).
- `params` argument to `dbBind()` (#266).
- `dbConnect()` now issues `SET datestyle to iso, mdy` to avoid translation errors for datetime values with databases configured differently (#287, @baderstine).
- `dbConnect()` defaults to `timezone_out = NULL`; this means to use `timezone`.
- The `FORCE_AUTOBREW` environment variable enforces the use of autobrew in `configure` (#283, @jeroen).
- `configure` on macOS, small tweaks (#282, #283, @jeroen).
- `configure` script: remove `$()`, not reliably detected by `checkbashisms`.
- `configure` uses a shell script and no longer forwards to `src/configure.bash` (#265).
- `dbConnect()` gains a `timezone_out` argument; the default `NULL` means to use `timezone` (#222).
- `dbQuoteLiteral()` now quotes `difftime` values as `interval` (#270).
- `postgresWaitForNotify()` adds `LISTEN`/`NOTIFY` support (#237, @lentinj).
- `Redshift` driver for connecting to Redshift databases. Redshift databases behave almost identically to Postgres, so this driver allows downstream packages to distinguish between the two (#258).
- `Postgres()` together with `dbConnect()` (#242).
- `DOUBLE PRECISION` by default (#194).
- `dbWriteTable(copy = FALSE)`, `sqlData()` and `dbAppendTable()` now work for character columns (#209), which are always converted to UTF-8.
- `timezone` argument to `dbConnect()` (#187, @trafficonese).
- `dbGetInfo()` for the driver and the connection object.
- `dbConnect()` gains a `check_interrupts` argument that allows interrupting execution safely while waiting for query results to be ready (#193, @zozlak).
- `dbUnquoteIdentifier()` also handles unquoted identifiers of the form `table` or `schema.table`, for compatibility with dbplyr. In addition, a `catalog` component is supported for quoting and unquoting with `Id()`.
- `dbQuoteLiteral()` available for `"character"` (#209).
- `dbAppendTable()` (r-dbi/DBI#249).
- `POSIXt` timestamps (#191).
- `sqlData(copy = FALSE)` now uses `dbQuoteLiteral()` (#209).
- `dbUnquoteIdentifier()` (#220, @baileych).
- `REAL` to `DOUBLE PRECISION` (#204, @harvey131).
- `dbAppendTable()` for own connection class; don’t hijack the base class implementation (r-dbi/RMariaDB#119).
- `DbResult` and other classes are shared with RSQLite and RMariaDB.
- Replace `std::mem_fn()` by `boost::mem_fn()`, which works for older compilers.
- `bigint` argument to `dbConnect()`; supported values are `"integer64"`, `"integer"`, `"numeric"` and `"character"`. Large integers are returned as values of that type (r-dbi/DBItest#133).
- `temporary` and `fail_if_missing` (default: `TRUE`) arguments to `dbRemoveTable()` (r-dbi/DBI#141, r-dbi/DBI#197).
- `dbCreateTable()` and `dbAppendTable()` are used internally (r-dbi/DBI#74).
- The `field.types` argument to `dbWriteTable()` now must be named.
- `current_schemas(true)` is used also in `dbListObjects()` and `dbListTables()`, for consistency with `dbListFields()`. Objects from the `pg_catalog` schema are still excluded.
- `dbListFields()` doesn’t list fields from tables found in the `pg_catalog` schema.
- The `dbListFields()` method now works correctly if the `name` argument is a quoted identifier or of class `Id`, and throws an error if the table is not found (r-dbi/DBI#75).
- `format()` method for `SqliteConnection` (r-dbi/DBI#163).
- `Id()`, `DBI::dbIsReadOnly()` and `DBI::dbCanConnect()`.
- `dbGetException()` is no longer reexported from DBI.
- `dbFetch()` and `dbGetQuery()`: values of unknown type are returned as a character vector of class `"pq_xxx"`, where `"xxx"` is the “typname” returned from PostgreSQL. In particular, JSON and JSONB values now have class `"pq_json"` and `"pq_jsonb"`, respectively. The return value of `dbColumnInfo()` gains new columns `".oid"` (integer), `".known"` (logical) and `".typname"` (character) (#114, @etiennebr).
- Values of class `"integer64"` are now supported for `dbWriteTable()` and `dbBind()` (#178).
- `dbListObjects()`, `dbUnquoteIdentifier()` and `Id()`.
- Names of the `x` argument to `dbQuoteIdentifier()` are preserved in the output (r-dbi/DBI#173).
- Methods (e.g. for `dbGetQuery()`) are now exported, even if the package doesn’t provide a custom implementation (#168).
- Replace `timegm()` with a private implementation.
- `PQcancel()` is called if the query hasn’t completed; fixes transactions on Amazon Redshift (#159, @mmuurr).
- Initial release, compliant to the DBI specification.
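Two of the changes above can be illustrated together: savepoints via the `name` argument to `dbBegin()`/`dbCommit()`/`dbRollback()`, and `dbSendQuery(immediate = TRUE)` for semicolon-separated statements. A hedged sketch (connection parameters and the table name are placeholders):

```r
library(DBI)
con <- dbConnect(RPostgres::Postgres(), dbname = "mydb")

# immediate = TRUE: several statements in one call, no query parameters.
res <- dbSendQuery(con, "CREATE TABLE t (x int); INSERT INTO t VALUES (1);",
                   immediate = TRUE)
dbClearResult(res)

dbBegin(con)                  # an unnamed transaction must be open first
dbBegin(con, name = "sp1")    # creates SAVEPOINT sp1
dbExecute(con, "INSERT INTO t VALUES (2)")
dbRollback(con, name = "sp1") # rolls back to the savepoint only
dbCommit(con)                 # commits the outer transaction

dbDisconnect(con)
```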
- 64-bit integer values are returned via the bit64 package. This also means that numeric literals (as in `SELECT 1`) are returned as 64-bit integers. The `bigint` argument to `dbConnect()` allows overriding the data type on a per-connection basis.
- `row.names = FALSE`.
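The per-connection override described above can be sketched as follows (connection parameters are placeholders; return classes depend on the server and connection settings):

```r
library(DBI)

# With the default bigint = "integer64", 64-bit integers come back as
# bit64::integer64 values; bigint = "numeric" returns plain doubles instead.
con <- dbConnect(RPostgres::Postgres(), dbname = "mydb", bigint = "numeric")

dbGetQuery(con, "SELECT 1 AS x")

dbDisconnect(con)
```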