Importing database dump from 6.8.2 to 7.0b2

mstevens

28 Mar, 2012 04:51 PM

I've tried importing my SQL dump from version 6.8.2 into a new test instance running 7.0b2. I had a feeling it wouldn't be that simple given the structural changes to the database schema (and judging by cascade.log reporting certain issues: cxml_unpublishable already exists; Missing column: blockTwitterAccountName in cascade.cxml_foldercontent).

Is there a guide to upgrading the database structure so it will work with 7.0b2?

  1. Support Staff 1 Posted by Tim on 28 Mar, 2012 04:57 PM

    Hi,

    The application will make all of the necessary database structure changes. There is no need to manually change the database schema in any way. Can you attach your cascade.log file from the day you attempted the upgrade in your test instance? I'd also like to see the results of the following SQL query (attached as a separate file):

    select * from databasechangelog;
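
    If it helps, the query output can be captured straight to a file from the shell. This is just a sketch; the user and database name (`cascadeuser`, `cascade_db`) are placeholders for your actual connection details:

```shell
# Run the query non-interactively and save the result to a text file.
# Substitute your own MySQL user and database name.
mysql -u cascadeuser -p cascade_db \
  -e "select * from databasechangelog;" > databasechangelog.txt
```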
    

    Thanks!

  2. 2 Posted by mstevens on 28 Mar, 2012 05:14 PM

    Thanks Tim,

    Good to know. I did drop the "cxml_unpublishable" table manually (hoping it would magically fix things), but I realize this may have complicated matters further. There's also a possibility the database didn't fully import. It's about 13GB, so I had to run the import command in the background to ensure the ssh timeout didn't break the pipe.
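
    For reference, the background-import pattern I used looks roughly like this (filenames and credentials are placeholders; the password has to be inline since a backgrounded process can't answer the prompt):

```shell
# Detach the import from the ssh session so a timeout can't break the pipe.
# "cascade_db", the dump filename, and credentials are placeholders.
nohup mysql -u cascadeuser -p'secret' cascade_db < dump-6.8.2.sql > import.log 2>&1 &

# Watch progress; Ctrl-C stops the tail, not the import.
tail -f import.log
```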

    Let me know if you need any other info from me.

  3. Support Staff 3 Posted by Tim on 28 Mar, 2012 05:19 PM

    Hmm, the log file only contains the following message repeatedly:

     ERROR [MemoryQueueSearchJobScheduler] An error occurred while consuming from the lucene event queue: java.lang.NullPointerException
    

    These messages are usually the result of a different, underlying problem (likely the one you mentioned in your description). Since I don't see the actual startup of Cascade Server logged here, can you try attaching your catalina.out/catalina.log file? Basically, I'm interested in seeing Cascade boot and run into the database issues you mentioned.

    Thanks

  4. 4 Posted by mstevens on 28 Mar, 2012 06:58 PM

    I think this cascade.log will be more relevant. I restarted Cascade and did a tail -n 1000, then I removed all the intermingled, repetitive ERROR [MemoryQueueSearchJobScheduler] messages (the ones you were referring to above).
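
    (In case it's useful, the trim-and-filter step can be done in one pipeline; the log filename is whatever your installation uses:)

```shell
# Keep the last 1000 lines of the log, dropping the repetitive queue errors.
tail -n 1000 cascade.log \
  | grep -v 'MemoryQueueSearchJobScheduler' > cascade-trimmed.log
```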

  5. Support Staff 5 Posted by Tim on 28 Mar, 2012 09:14 PM

    Thanks for attaching that! OK, it looks like there may have been a partial upgrade applied to the database before this last update completed. Can you please try the following:

    • Re-import a copy of your production database into a new test database (i.e., a database dump taken before the recent upgrade attempt)
    • Point your test instance of Cascade to this test database
    • Start Cascade Server and let it run until you can browse to the login page
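
    The first two steps sketched as shell commands (database names, dump filename, and credentials are placeholders for your environment):

```shell
# 1. Create a fresh, empty test database and load the pre-upgrade dump into it.
mysqladmin -u root -p create cascade_test
mysql -u root -p cascade_test < production-dump-pre-upgrade.sql

# 2. Point the test instance's database connection at cascade_test
#    (edit the JDBC URL in your Cascade configuration), then start Cascade.
```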

    If you can't get to the login page after 10 minutes or so, keep the process running and attach your latest log files here.

    Thanks!

  6. 6 Posted by mstevens on 29 Mar, 2012 05:58 PM

    I've decided to get your sample database working first. Cascade starts up, and cascade.log looked good at first. I'm able to log in and use cascade.

    However, both cascade.log and catalina.out are now getting flooded with lucene event queue messages. Maybe we can fix this issue first, then I'll test out our production database.

    In catalina.out, it's these same 5 lines repeating endlessly:

    ERROR [MemoryQueueSearchJobScheduler] : An error occurred while consuming from the lucene event queue: java.lang.NullPointerException log4j:ERROR Error occured while converting date.
    java.lang.NullPointerException
    log4j:ERROR Error occured while converting date.
    java.lang.NullPointerException

    In cascade.log it's this line over and over:

    ERROR [MemoryQueueSearchJobScheduler] An error occurred while consuming from the lucene event queue: java.lang.NullPointerException

    I would have attached the files if they weren't so huge.

  7. Support Staff 7 Posted by Tim on 29 Mar, 2012 06:51 PM

    Those error messages you are referring to are symptoms of a different issue which would likely be present in the log files before that message started occurring. Are you able to copy/paste the entire log file up to that point where those messages begin?
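
    One way to extract just that portion without opening the whole file (a sketch; assumes the log filename used earlier in this thread):

```shell
# Find the line number of the first repeated error, then keep everything before it.
first=$(grep -n -m1 'MemoryQueueSearchJobScheduler' cascade.log | cut -d: -f1)
head -n "$((first - 1))" cascade.log > cascade-before-errors.log
```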

  8. 8 Posted by mstevens on 29 Mar, 2012 07:37 PM

    Strange, I can't get it to write errors to cascade.log anymore, but I've attached the log results from stopping and starting Cascade. It appears to still be dumping the same errors into catalina.out, but hopefully this gives you more to go on.

  9. Support Staff 9 Posted by Tim on 29 Mar, 2012 07:38 PM

    Can you try attaching the log file again? It doesn't appear to be attached to the last message.

  10. 10 Posted by mstevens on 29 Mar, 2012 07:39 PM

    Oops, it seems to have stripped my attachments (maybe because I was replying before refreshing the page, so it detected your new reply and redirected me).

    Here they are...

    Ha, stripped again, but this time I'll re-attach them.

  11. Support Staff 11 Posted by Tim on 29 Mar, 2012 09:51 PM

    Hmm, these logs that were attached aren't really telling us what's going on. The 'start' log looks good and you indicated as such since you mentioned you were able to log in. Can you check to see if you have a catalina.out or catalina.log file from today? If so, can you zip those up and try attaching them here?

  12. 12 Posted by mstevens on 30 Mar, 2012 07:22 PM

    I haven't had any luck finding where the lucene event queue error messages begin, because each time I notice they're being logged to the file, they've already filled it up and the successful log messages are nowhere to be found. Perhaps I need to revisit this issue later.

    Sorry to switch focus, but the thing that is making testing more complicated than I'd like is the size of our database. Our production db export is 12GB (I believe the cxml_blob table accounts for 10.5GB of that). Is there any way to take the blob table out of the equation for the purposes of testing? I'm also interested in ways to retroactively cut down the size of the database.
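
    (For what it's worth, one approach I've been considering is excluding the blob table's data from the dump; this is just a sketch with placeholder names, and I'm assuming Cascade would still expect the table itself to exist, so its empty schema gets appended:)

```shell
# Dump everything except the contents of the huge blob table.
# "cascade_db" and the user are placeholders for the real connection details.
mysqldump -u cascadeuser -p cascade_db \
  --ignore-table=cascade_db.cxml_blob > cascade-noblobs.sql

# Append the blob table's schema (no rows) so the table still exists on import.
mysqldump -u cascadeuser -p --no-data cascade_db cxml_blob >> cascade-noblobs.sql
```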

    I realize I'm sort of changing the subject, so if you prefer I open a new ticket I'd be happy to.

  13. 13 Posted by mstevens on 30 Mar, 2012 07:41 PM

    Ok, I tried starting with the existing import of our production database (which may have already been partially upgraded :/). At least I was able to get the cascade.log file with info that may help. You'll see where the startup is initialized at 2012-03-30 14:25:30,756 (line 33941), and the error messages began about 8 minutes later.

    Let me know if this helps, or if I need to re-import a fresh export of our production database (which I've been avoiding since it's 12 GB, takes forever, and I have to use nohup and & so the ssh session timeout doesn't kill the mysql > process) and try again.

    Thanks!

  14. Support Staff 14 Posted by Tim on 30 Mar, 2012 08:36 PM

    Thanks for attaching that. Yes, this time I can see the problem that is eventually leading to the repeated messages. The issue is here:

    Caused By: Table 'cxml_unpublishable' already exists
    

    Basically, an update is trying to create a new table in the database but that table already exists. The table is part of a 7.0 update, so the only way I can think of that would cause it to exist already is if this database has been partially upgraded. I would recommend re-importing a fresh copy of your production database and then running the upgrade again. After you start the upgrade, do not stop the Cascade Server process until you can reach the login screen. If you can't reach the login screen after 30 minutes or so, attach your cascade.log file from that time and leave the process running while we investigate.
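
    Before re-importing, you can confirm the partial-upgrade symptom with a couple of quick checks (a sketch; substitute your connection details, and note I'm assuming the standard databasechangelog columns here):

```shell
# Does the 7.0 table already exist in the supposedly 6.8.2 database?
mysql -u cascadeuser -p cascade_db -e "show tables like 'cxml_unpublishable';"

# Which schema changesets ran most recently?
mysql -u cascadeuser -p cascade_db \
  -e "select id, dateexecuted from databasechangelog order by dateexecuted desc limit 5;"
```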

    > Sorry to switch focus, but the thing that is making testing more complicated than I'd like is the size of our database. Our production db export is 12GB (I believe the cxml_blob table accounts for 10.5GB of that). Is there any way to take the blob table out of the equation for the purposes of testing? I'm also interested in ways to retroactively cut down the size of the database.

    Actually, there will be a feature in 7.0 that should help with this. It is not to be used in place of routine database backups, but it may help you in cases like this. Check out this blog post and read the section on Site Import/Export.

    I'll wait to hear back from you regarding your upgrade attempt after importing a fresh copy of the production database.

    Thanks!

  15. 15 Posted by mstevens on 05 Apr, 2012 02:52 PM

    After importing a fresh copy of the database, Cascade appears to be working properly. I'm not sure if the culprit was an incomplete db import, or if I didn't wait long enough when starting up cascade the first time (after import).

    Either way, I'm happy to report it's working. Now, just waiting for 7.0 official release! Any ETA on that?

    Thanks!

  16. Support Staff 16 Posted by Tim on 05 Apr, 2012 02:58 PM

    Awesome! Glad to hear it's working properly now.

    No word on a final release date for Cascade Server 7. We're still doing some testing and hope to have it out in a few weeks. Obviously that is subject to change depending on how everything goes with regard to testing/QA.

    Thanks!

  17. Tim closed this discussion on 05 Apr, 2012 02:58 PM.
