c# - Why is this code slowing down?
I'm in the process of converting a number of Access databases to XML files. I have done this before and still have the code from a previous project. However, that code doesn't let me structure the XML as I please, so I need new code this time around. I'm using XDocument with for-loops to achieve it, and it gets incredibly slow after a couple of thousand rows of data.
Reading up on how XDocument works tells me that XElement.Add copies the entire XML code and pastes it back into the file with the new element added. If that's true, that's where the problem lies.
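That claim is easy to sanity-check before redesigning anything. A minimal sketch (names and sizes are my own, not from the question): append a large number of child elements one at a time and time it. If Add() really copied the whole tree on every call, this loop would be quadratic and take minutes; in practice XElement.Add appends in place and finishes almost instantly, which suggests the slowdown is elsewhere.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Xml.Linq;

class AddBenchmark
{
    static void Main()
    {
        // Hypothetical micro-benchmark: 100,000 single-element Add() calls.
        var root = new XElement("Table");
        var doc = new XDocument(root);

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100000; i++)
        {
            root.Add(new XElement("Item", new XAttribute("Index", i)));
        }
        sw.Stop();

        // Completes in well under a second on any recent machine.
        Console.WriteLine("Added {0} elements in {1} ms",
            root.Elements().Count(), sw.ElapsedMilliseconds);
    }
}
```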
This is the part that reads from Access and writes the data to XML; take a look and see if there's a way of saving it. Converting a database of 27 columns and 12 256 rows takes 30 minutes, while a smaller one of a mere 500 rows takes 5 seconds.
private void ReadWrite(string file)
{
    using (_connection = new OleDbConnection(string.Format("Provider=Microsoft.ACE.OLEDB.12.0;Mode=12;Data Source={0}", pathAccess)))
    {
        _connection.Open();
        // Gives me the values from the Access DB: tableName, columnName, colCount, rowCount and listOfTimestamps.
        GetValues(pathAccess);

        XDocument doc = new XDocument(new XDeclaration("1.0", "utf-8", "true"), new XElement(tableName));
        for (int rowInt = 0; rowInt < rowCount; rowInt++)
        {
            XElement item = new XElement("Item",
                new XAttribute("Time", listOfTimestamps[rowInt].ToString().Replace(" ", "_")));
            doc.Root.Add(item);

            // colCount - 1 prevents the timestamp from being written again.
            for (int colInt = 0; colInt < colCount - 1; colInt++)
            {
                using (OleDbCommand cmnd = new OleDbCommand(
                    string.Format("SELECT {0} FROM {1} WHERE Timestamp = #{2}#",
                        columnName[colInt], tableName, listOfTimestamps[rowInt]), _connection))
                {
                    XElement value = new XElement(columnName[colInt], cmnd.ExecuteScalar().ToString());
                    item.Add(value);
                }
            }
            // Updates the progress bar.
            backgroundWorker1.ReportProgress(rowInt);
        }
        backgroundWorker1.ReportProgress(0);
        doc.Save(file);
    }
}
This code is from my old converter. It is pretty much unaffected by the size of the database; the 12 556-row database takes a second to convert. Is there possibly a way to merge these two?
public void ReadWrite2(string file)
{
    DataSet dataSet = new DataSet();
    using (_connection = new OleDbConnection(string.Format("Provider=Microsoft.ACE.OLEDB.12.0;Mode=12;Data Source={0}", file)))
    {
        _connection.Open();
        DataTable schemaTable = _connection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables,
            new object[] { null, null, null, "TABLE" });
        foreach (DataRow dataTableRow in schemaTable.Rows)
        {
            string tableName = dataTableRow["TABLE_NAME"].ToString();
            DataTable dataTable = dataSet.Tables.Add(tableName);
            using (OleDbCommand readRows = new OleDbCommand("SELECT * FROM " + tableName, _connection))
            {
                OleDbDataAdapter adapter = new OleDbDataAdapter(readRows);
                adapter.Fill(dataTable);
            }
        }
    }
    dataSet.WriteXml(file.Replace(".mdb", ".xml"));
}
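One possible way to "merge these two" is to keep the fast adapter.Fill read path from ReadWrite2 but shape the XML through the DataSet itself: the DataSet name becomes the root element, each table name becomes a row element, and a column can be rendered as an attribute via ColumnMapping. A minimal in-memory sketch (the "PLMSLog"/"Log"/"Time" names are borrowed from the question's ReadWrite3; the sample row is invented):

```csharp
using System;
using System.Data;
using System.IO;

class WriteXmlShaping
{
    static void Main()
    {
        // Hypothetical in-memory table standing in for one filled by OleDbDataAdapter.Fill.
        var dataSet = new DataSet("PLMSLog");      // becomes the XML root element
        var table = dataSet.Tables.Add("Log");     // becomes one element per row
        table.Columns.Add("Time", typeof(string));
        table.Columns.Add("Value", typeof(int));

        // Render the timestamp as an attribute instead of a child element.
        table.Columns["Time"].ColumnMapping = MappingType.Attribute;
        table.Rows.Add("2013-04-08_14:19:27", 42);

        var writer = new StringWriter();
        dataSet.WriteXml(writer);
        Console.WriteLine(writer.ToString());
        // Produces: <PLMSLog><Log Time="2013-04-08_14:19:27"><Value>42</Value></Log></PLMSLog>
    }
}
```

This keeps the single-SELECT-per-table speed while giving some control over the output structure; it won't cover every layout, but it covers root, row, and attribute naming without any per-row code.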
Edit: To clarify, the application slows down as it executes. The first 500 rows take 5 seconds no matter how big the database is.
Update: Okay, I've come back after the weekend and made a small adjustment to the code to separate the reading and the writing, filling a jagged array with the values in one loop and writing them out in another. This has proven my theory wrong; it's in fact the reading that takes the time. Any ideas on how to fill the array with values without hitting the database inside the loop?
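The usual answer is to issue one SELECT covering all columns and walk the result with a data reader, copying each row in a single call instead of one query per cell. A sketch of that pattern (here a DataTableReader stands in for OleDbCommand.ExecuteReader() so the example runs without an Access file; the table and its two rows are invented):

```csharp
using System;
using System.Collections.Generic;
using System.Data;

class ReaderToArray
{
    static void Main()
    {
        // Hypothetical stand-in for a table read via "SELECT <columns> FROM <table>".
        // A DataTableReader behaves like any other data reader, so the loop below
        // works unchanged against an OleDbDataReader.
        var table = new DataTable("Samples");
        table.Columns.Add("Temp", typeof(double));
        table.Columns.Add("Pressure", typeof(double));
        table.Rows.Add(21.5, 101.3);
        table.Rows.Add(22.1, 101.1);

        var rows = new List<object[]>();
        using (IDataReader reader = table.CreateDataReader())
        {
            while (reader.Read())
            {
                // One GetValues call copies the whole row; no per-cell query needed.
                var row = new object[reader.FieldCount];
                reader.GetValues(row);
                rows.Add(row);
            }
        }
        Console.WriteLine("Read {0} rows of {1} columns", rows.Count, rows[0].Length);
    }
}
```

This turns rowCount × colCount round trips into a single query plus a sequential read, which is exactly why the update below ends up so much faster.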
Update 2: This is the end result, after switching to a DataReader.Read() loop and collecting the data right away.
public void ReadWrite3(string save, string load)
{
    using (_connection = new OleDbConnection(string.Format("Provider=Microsoft.ACE.OLEDB.12.0;Mode=12;Data Source={0}", load)))
    {
        _connection.Open();
        GetValues(_connection);
        _command = new OleDbCommand(string.Format("SELECT {0} FROM {1}", strColumns, tables), _connection);

        XDocument doc = new XDocument(new XDeclaration("1.0", "utf-8", "true"),
            new XElement("PLMSLog", new XAttribute("MachineID", root)));
        using (_dataReader = _command.ExecuteReader())
        {
            for (int rowInt = 0; _dataReader.Read(); rowInt++)
            {
                for (int logInt = 0; logInt < colCount; logInt++)
                {
                    XElement log = new XElement("Log");
                    doc.Root.Add(log);
                    elementValues = UpdateElementValues(rowInt, logInt);
                    for (int valInt = 0; valInt < elements.Length; valInt++)
                    {
                        XElement value = new XElement(elements[valInt], elementValues[valInt]);
                        log.Add(value);
                    }
                }
            }
        }
        doc.Save(save);
    }
}
Forgive me, but I think you're making your life more complicated than it needs to be. If you use an OleDbDataReader object you can just open it and read through the Access table row by row, without having to cache the row data in an array (since you already have it in the DataReader).
For example, say I have the sample data...
dbid  dbname  dbcreated
----  ------  ---------
bar   bardb   2013-04-08 14:19:27
foo   foodb   2013-04-05 11:23:02
and following code runs through table...
static void Main(string[] args)
{
    OleDbConnection conn = new OleDbConnection(
        @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Documents and Settings\Administrator\Desktop\Database1.accdb;");
    conn.Open();
    OleDbCommand cmd = new OleDbCommand("SELECT * FROM myTable", conn);
    OleDbDataReader rdr = cmd.ExecuteReader();
    int rowNumber = 0;
    while (rdr.Read())
    {
        rowNumber++;
        Console.WriteLine("Row " + rowNumber.ToString() + ":");
        for (int colIdx = 0; colIdx < rdr.FieldCount; colIdx++)
        {
            string colName = rdr.GetName(colIdx);
            Console.WriteLine("    rdr[\"" + colName + "\"]: " + rdr[colName].ToString());
        }
    }
    rdr.Close();
    conn.Close();
    Console.WriteLine("Done.");
}
...and it produces the result...
Row 1:
    rdr["dbid"]: foo
    rdr["dbname"]: foodb
    rdr["dbcreated"]: 2013-04-05 11:23:02
Row 2:
    rdr["dbid"]: bar
    rdr["dbname"]: bardb
    rdr["dbcreated"]: 2013-04-08 14:19:27
Done.
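Since the reader already streams rows one at a time, you could go a step further and stream the output too: write each row with an XmlWriter as it is read, instead of building the whole XDocument in memory first. A minimal sketch (the column names and rows below are hypothetical stand-ins for rdr.GetName(i) / rdr[i] from the loop above):

```csharp
using System;
using System.Xml;

class StreamingExport
{
    static void Main()
    {
        // Invented sample data standing in for a DataReader's rows.
        string[] columns = { "dbid", "dbname" };
        object[][] rows =
        {
            new object[] { "foo", "foodb" },
            new object[] { "bar", "bardb" }
        };

        var settings = new XmlWriterSettings { Indent = true };
        using (XmlWriter writer = XmlWriter.Create("export.xml", settings))
        {
            writer.WriteStartElement("Table");
            foreach (var row in rows)               // stands in for while (rdr.Read())
            {
                writer.WriteStartElement("Item");
                for (int i = 0; i < columns.Length; i++)
                    writer.WriteElementString(columns[i], row[i].ToString());
                writer.WriteEndElement();           // </Item>
            }
            writer.WriteEndElement();               // </Table>
        }
        Console.WriteLine("Wrote export.xml");
    }
}
```

For a 27-column, 12 000-row export this keeps memory flat regardless of table size, at the cost of losing the ability to revisit earlier elements.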