python - Quickly dumping a database in memory to file
I want to take advantage of the speed benefits of holding an SQLite database (via SQLAlchemy) in memory while I go through a one-time process of inserting content, and then dump it to file, to be stored and used later.
Consider a bog-standard database created in the usual way:
# in-memory database
e = create_engine('sqlite://')
Is there a quicker way of moving its contents to disk, other than creating a brand new database and inserting each entry manually?
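For concreteness, the one-time population step looks roughly like this (the entries table and the dummy data are placeholders, not my real schema):

from sqlalchemy import create_engine, text

e = create_engine('sqlite://')  # the in-memory engine from above

with e.begin() as conn:
    conn.execute(text("CREATE TABLE entries (id INTEGER PRIMARY KEY, payload TEXT)"))
    # bulk-insert the content in one executemany-style call
    conn.execute(text("INSERT INTO entries (payload) VALUES (:payload)"),
                 [{"payload": f"item {i}"} for i in range(100_000)])

# ...and at this point I would like to dump everything in e to a file.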
Edit:
There is some doubt as to whether or not I'd even see a benefit from using an in-memory database. Unfortunately, I already see a huge time difference of 120x.
This confusion is due to me missing out an important detail in the question, and due to a lack of understanding on my part re: caches / page sizes / etc. Allow me to elaborate:
I am running simulations of a system I have set up, with each simulation going through the following stages:
- Make queries to the database.
- Make calculations / run a simulation based on the results of the queries.
- Insert new entries into the database based on the most recent simulation.
- Make sure the database is up to date with the new entries by running commit().
While I only ever make a dozen or so insertions on each simulation run, I run millions of simulations, and the results of each simulation need to be available for future simulations to take place. Needless to say, the read-and-write process takes considerably longer when running against a file-backed database; it's the difference between 6 hours and a month.
Hopefully this clarifies things. I can cobble together a simple Python script outlining the process a little further if necessary.
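Something along these lines (SQLAlchemy 2.0-style; the table, the LIMIT, and the stand-in calculation are illustrative only, not my real code):

from sqlalchemy import create_engine, text

e = create_engine('sqlite://')   # in-memory while the simulations run

with e.connect() as conn:
    conn.execute(text("CREATE TABLE results (id INTEGER PRIMARY KEY, value REAL)"))
    conn.execute(text("INSERT INTO results (value) VALUES (1.0)"))  # seed row
    conn.commit()

    for _ in range(1_000_000):
        # 1. query results of earlier simulations
        recent = conn.execute(
            text("SELECT value FROM results ORDER BY id DESC LIMIT 10")).scalars().all()
        # 2. run the simulation on those results (stand-in calculation)
        new_rows = [{"value": sum(recent) / len(recent) + i} for i in range(12)]
        # 3. insert the dozen or so new entries
        conn.execute(text("INSERT INTO results (value) VALUES (:value)"), new_rows)
        # 4. commit so later simulations can see them
        conn.commit()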
SQLAlchemy and SQLite know how to cache and do batch inserts just fine.
There is no benefit in using an in-memory SQLite database here, because that database uses pages just like the on-disk version would; the only difference is that those pages eventually get written to disk for the disk-based database. The difference in performance is only about 1.5 times; see SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
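For example, wrapping each batch of inserts in a single transaction instead of committing row by row is the kind of batching SQLite handles well; a minimal sketch against a file-backed database (the table, file name and data are placeholders):

from sqlalchemy import create_engine, text

# file-backed engine; the path is just an example
e = create_engine('sqlite:///simulation.db')

rows = [{"value": float(i)} for i in range(10_000)]

with e.begin() as conn:   # one transaction for the whole batch
    conn.execute(text("CREATE TABLE IF NOT EXISTS results (id INTEGER PRIMARY KEY, value REAL)"))
    # a single executemany-style call, committed once on exit
    conn.execute(text("INSERT INTO results (value) VALUES (:value)"), rows)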
There is no way to move an in-memory database to a disk-based database at a later time, short of running queries on the in-memory database and executing batch inserts into the disk-based database on two separate connections.
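That copy would look roughly like this (the results table, its columns and the file name are only illustrative; each table would need its own query/insert pair, and the schema has to be recreated by hand on the disk side):

from sqlalchemy import create_engine, text

mem = create_engine('sqlite://')            # the populated in-memory database
disk = create_engine('sqlite:///dump.db')   # the file to dump into

with mem.connect() as src, disk.begin() as dst:
    dst.execute(text("CREATE TABLE results (id INTEGER PRIMARY KEY, value REAL)"))
    rows = [dict(r) for r in src.execute(text("SELECT id, value FROM results")).mappings()]
    if rows:
        dst.execute(text("INSERT INTO results (id, value) VALUES (:id, :value)"), rows)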