
All times are UTC - 5 hours [ DST ]



Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 8 posts ] 
 Post subject: Finding / deleting objects from within interceptor callback?
PostPosted: Tue Mar 09, 2004 2:24 pm 
Expert

Joined: Thu Jan 08, 2004 6:17 pm
Posts: 278
I have a Hibernate structure that looks like this:

Code:
<hibernate-mapping>

    <class name="com.nimblefish.core.domain.work.WorkRequest">
        <id name="id" type="long" unsaved-value="null" >
            <meta attribute="scope-set">private</meta>
            <generator class="native"/>
        </id>
        <set name="initialJobs" cascade="save-update" lazy="true" inverse="true">
            <key column="workrequest_id"/>
            <one-to-many class="com.nimblefish.core.domain.work.Job"/>
        </set>

    </class>

    <class name="com.nimblefish.core.domain.work.Job">
        <meta attribute="generated-class">com.nimblefish.core.domain.generated.JobGen</meta>

        <id name="id" type="long" unsaved-value="null" >
            <meta attribute="scope-set">private</meta>
            <generator class="native"/>
        </id>

        <many-to-one name="workRequest" column="workrequest_id"
            class="com.nimblefish.core.domain.work.WorkRequest"/>

        <!-- what are this job's successor jobs?  -->
        <set name="successorJobs" table="pred_job_to_succ_job" cascade="save-update">
            <key column="successor_job_id"/>
            <many-to-many class="com.nimblefish.core.domain.work.Job" column="predecessor_job_id"/>
        </set>

        <!-- what are this job's predecessor jobs? -->
        <set name="predecessorJobs" table="pred_job_to_succ_job" inverse="true">
            <key column="predecessor_job_id"/>
            <many-to-many class="com.nimblefish.core.domain.work.Job" column="successor_job_id"/>
        </set>

    </class>

</hibernate-mapping>

So you can see that I have a WorkRequest which references some set of Jobs, and each Job can reference (and be referenced bidirectionally by) some subsequent Jobs. In other words, the WorkRequest points at the initial nodes of a directed acyclic (by construction) graph of Jobs.
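Stripped of the Hibernate machinery, the ownership rule described above is just a graph traversal: collect every Job transitively reachable from the WorkRequest's initial jobs. A minimal, self-contained sketch (the stub `Job` class here is hypothetical, standing in for the generated domain classes):

```java
import java.util.*;

// Hypothetical stand-in for the mapped Job class: each Job knows its
// successor Jobs, forming the directed acyclic graph from the mapping.
class Job {
    final String name;
    final Set successors = new HashSet(); // Set of Job, pre-generics style
    Job(String name) { this.name = name; }
}

public class TransitiveJobs {
    // Collect every Job transitively reachable from the initial set --
    // i.e. everything the WorkRequest "owns" and would need to delete.
    static Set collectDependents(Set initialJobs) {
        Set seen = new LinkedHashSet();
        List queue = new ArrayList(initialJobs);
        while (!queue.isEmpty()) {
            Job j = (Job) queue.remove(queue.size() - 1);
            if (seen.add(j)) {          // first visit only (graph may share nodes)
                queue.addAll(j.successors);
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        Job a = new Job("a"), b = new Job("b"), c = new Job("c");
        a.successors.add(b);
        b.successors.add(c);
        Set owned = collectDependents(Collections.singleton(a));
        System.out.println(owned.size()); // prints 3: a, b, c
    }
}
```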

In my application, the WorkRequest "owns" all the Jobs it (transitively) points to, so when the WorkRequest is deleted I would like to delete all the Jobs that are (transitively) dependent on it. So I thought, let's be clever. I'll define an interface Deletable that will be implemented by any object that has dependent objects that should be deleted when that object is deleted (by cascade or otherwise). Then I'll define an Interceptor for Deletables that invokes application-specific logic to delete the dependent objects. Then I'll make WorkRequest implement Deletable by deleting all its dependent Jobs. As follows:

Code:
/**
* This object should be deleted via this interface when Hibernate cascades
* a delete to it.  This is used, e.g., for WorkRequests so all their dependent
* Jobs can be deleted when a WorkRequest is deleted.
*/
public interface Deletable {
    /**
     * Delete this object's dependents, according to some
     * application-specific lifecycle rules.
     */
    public void deleteDependents () throws NfPersistenceException;
}

public class LifecycleInterceptor implements Interceptor {
...
    public void onDelete(Object o, Serializable serializable, Object[] objects, String[] strings, Type[] types) throws CallbackException {
        if (o instanceof Deletable) {
            try {
                ((Deletable)o).deleteDependents();
            } catch (NfPersistenceException ex) {
                throw new CallbackException("Can't delete object "+o, ex);
            }
        }
    }
...
}

public class WorkRequest extends com.nimblefish.core.domain.generated.WorkRequestGen
implements Deletable {
...
    /**
     * Delete this WorkRequest (with the help of the DAO layer).
     */
    public void deleteDependents () throws NfPersistenceException {
        WorkFinder.deleteWorkRequestDependentJobs(this.getId());
    }
...
}

public class WorkFinder {
...
    public static void deleteWorkRequestDependentJobs (Long workRequestId) throws NfPersistenceException {
        try {
            List list = Persistence.session().find("from Job job where job.workRequest.id = ?",
                    workRequestId, Hibernate.LONG);
            for (int i = 0; i < list.size(); i++) {
                Job job = (Job) list.get(i);
                Persistence.delete(job);
            }
        } catch (HibernateException e) {
            throw new NfPersistenceException("Can't delete jobs for work request "+workRequestId);
        }
    }
...
}

Unfortunately this doesn't work:
Code:
    [junit] java.lang.StackOverflowError
    [junit]     at net.sf.hibernate.collection.PersistentCollection.initialize(PersistentCollection.java:200)
    [junit]     at net.sf.hibernate.collection.PersistentCollection.read(PersistentCollection.java:69)
    [junit]     at net.sf.hibernate.collection.Set.iterator(Set.java:131)
    [junit]     at net.sf.hibernate.type.PersistentCollectionType.getElementsIterator(PersistentCollectionType.java:102)
    [junit]     at net.sf.hibernate.engine.Cascades.getLoadedElementsIterator(Cascades.java:557)
    [junit]     at net.sf.hibernate.engine.Cascades.access$200(Cascades.java:28)
    [junit]     at net.sf.hibernate.engine.Cascades$4.getCascadableChildrenIterator(Cascades.java:118)
    [junit]     at net.sf.hibernate.engine.Cascades.cascadeCollection(Cascades.java:525)
    [junit]     at net.sf.hibernate.engine.Cascades.cascade(Cascades.java:452)
    [junit]     at net.sf.hibernate.engine.Cascades.cascade(Cascades.java:503)
    [junit]     at net.sf.hibernate.engine.Cascades.cascade(Cascades.java:482)
    [junit]     at net.sf.hibernate.impl.SessionImpl.preFlushEntities(SessionImpl.java:2557)
    [junit]     at net.sf.hibernate.impl.SessionImpl.flushEverything(SessionImpl.java:2197)
    [junit]     at net.sf.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:1732)
    [junit]     at net.sf.hibernate.impl.SessionImpl.getQueries(SessionImpl.java:1499)
    [junit]     at net.sf.hibernate.impl.SessionImpl.find(SessionImpl.java:1464)
    [junit]     at net.sf.hibernate.impl.SessionImpl.find(SessionImpl.java:1454)
    [junit]     at net.sf.hibernate.impl.SessionImpl.find(SessionImpl.java:1450)
    [junit]     at com.nimblefish.core.dao.WorkFinder.deleteWorkRequestDependentJobs(WorkFinder.java:107)
    [junit]     at com.nimblefish.core.domain.work.WorkRequest.deleteDependents(WorkRequest.java:51)
    [junit]     at com.nimblefish.core.persist.LifecycleInterceptor.onDelete(LifecycleInterceptor.java:32)
    [junit]     at net.sf.hibernate.impl.SessionImpl.doDelete(SessionImpl.java:1133)
    [junit]     at net.sf.hibernate.impl.SessionImpl.delete(SessionImpl.java:1113)
    [junit]     at net.sf.hibernate.engine.Cascades.deleteOrphans(Cascades.java:545)
    [junit]     at net.sf.hibernate.engine.Cascades.cascadeCollection(Cascades.java:533)
    [junit]     at net.sf.hibernate.engine.Cascades.cascade(Cascades.java:452)
    [junit]     at net.sf.hibernate.engine.Cascades.cascade(Cascades.java:503)
    [junit]     at net.sf.hibernate.engine.Cascades.cascade(Cascades.java:482)
    [junit]     at net.sf.hibernate.impl.SessionImpl.preFlushEntities(SessionImpl.java:2557)
    [junit]     at net.sf.hibernate.impl.SessionImpl.flushEverything(SessionImpl.java:2197)
    [junit]     at net.sf.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:1732)
    [junit]     at net.sf.hibernate.impl.SessionImpl.getQueries(SessionImpl.java:1499)
    [junit]     at net.sf.hibernate.impl.SessionImpl.find(SessionImpl.java:1464)
    [junit]     at net.sf.hibernate.impl.SessionImpl.find(SessionImpl.java:1454)
    [junit]     at net.sf.hibernate.impl.SessionImpl.find(SessionImpl.java:1450)
    [junit]     at com.nimblefish.core.dao.WorkFinder.deleteWorkRequestDependentJobs(WorkFinder.java:107)
    [junit]     at com.nimblefish.core.domain.work.WorkRequest.deleteDependents(WorkRequest.java:51)
    [junit]     at com.nimblefish.core.persist.LifecycleInterceptor.onDelete(LifecycleInterceptor.java:32)
    [junit]     at net.sf.hibernate.impl.SessionImpl.doDelete(SessionImpl.java:1133)
    [junit]     at net.sf.hibernate.impl.SessionImpl.delete(SessionImpl.java:1113)


As you can see, it's an infinite flush / delete / intercept / deleteDependents / find / flush loop.

Is this just not supported? Should interceptors themselves not be doing Hibernate finds / deletes -- in other words, are Interceptors "not reentrant" in some sense? Is there some other, better pattern for doing this kind of thing? Or should I give up on overloading Hibernate's cascade with my own application's cascade semantics, and just define a "WorkRequest.delete()" operation that does it all at the application level?

Cheers!
Rob


 Post subject:
PostPosted: Tue Mar 09, 2004 5:15 pm 
Hibernate Team

Joined: Tue Aug 26, 2003 12:50 pm
Posts: 5130
Location: Melbourne, Australia
You are not allowed to call the session from an Interceptor. Opening a new session on the same JDBC Connection is allowed.
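For the record, a sketch of the second approach against the Hibernate 2.x API (`SessionFactory.openSession(Connection)` and `Session.connection()` are real 2.x methods; how the interceptor gets hold of the SessionFactory and the current Session is an assumption here, since the callback receives neither):

```java
import java.io.Serializable;
import net.sf.hibernate.*;
import net.sf.hibernate.type.Type;

// Sketch only: do the dependent deletes in a *second* Session opened on the
// same JDBC connection, so they join the current database transaction.
public class NewSessionInterceptor implements Interceptor {
    private final SessionFactory sessionFactory;
    private final Session currentSession; // assumed to be injected somehow

    public NewSessionInterceptor(SessionFactory sf, Session current) {
        this.sessionFactory = sf;
        this.currentSession = current;
    }

    public void onDelete(Object o, Serializable id, Object[] state,
                         String[] names, Type[] types) throws CallbackException {
        if (o instanceof Deletable) {
            try {
                Session temp =
                    sessionFactory.openSession(currentSession.connection());
                try {
                    // ... run the dependent deletes against 'temp' here ...
                    temp.flush();
                } finally {
                    temp.close(); // returns, but does not close, the shared connection
                }
            } catch (HibernateException ex) {
                throw new CallbackException("Can't delete dependents of " + o, ex);
            }
        }
    }
    // ... remaining Interceptor methods omitted for brevity ...
}
```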


 Post subject:
PostPosted: Tue Mar 09, 2004 5:32 pm 
Expert

Joined: Thu Jan 08, 2004 6:17 pm
Posts: 278
If I open another Session on the same JDBC connection, will it inherit the same transaction context? (I assume so, and further that this will work OK....)

It is a little odd, though, that this would leave the original session (from which I want to delete these other objects) out of sync with the deletions done in the Interceptor-spawned session.

In general, does this "use an interceptor to handle extended lifecycle application logic" pattern seem to make sense, or does it make you, Gavin, nervous? It would definitely be nicer if there *were* some session-safe Interceptor capability, since it would be wonderful to be able to seamlessly integrate our own cascade behaviors into a single session....

:-)
Cheers!
Rob


 Post subject:
PostPosted: Tue Mar 09, 2004 6:26 pm 
Hibernate Team

Joined: Tue Aug 26, 2003 12:50 pm
Posts: 5130
Location: Melbourne, Australia
Quote:
If I open another Session from the same JDBC connection, will that inherit the same transaction context?


Of course.

Quote:
It is a little odd though that this would then leave the original session (from which I want to delete these other objects) out of sync with the deletions done in the Interceptor-spawned session.


Oh. Hmmmm ... that is not such a good idea. Probably would not work.

Quote:
In general, does this "use an interceptor to handle extended lifecycle application logic" pattern seem to make sense, or does it make you, Gavin, nervous?


It makes me nervous. Interceptor is intended for handling orthogonal cross-cutting concerns like logging and creating audit records. It's not intended for really "doing stuff". Lifecycle is a better choice for that.
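For readers following along: the `net.sf.hibernate.Lifecycle` callback interface looks roughly like this (a sketch from the 2.x API; consult the reference documentation for the authoritative signatures). Returning true from the on* methods vetoes the operation:

```java
import java.io.Serializable;
import net.sf.hibernate.CallbackException;
import net.sf.hibernate.Session;

// Approximate shape of net.sf.hibernate.Lifecycle: a persistent class can
// implement this to be called back at save/update/delete/load time, with
// the owning Session passed in -- unlike Interceptor callbacks.
public interface Lifecycle {
    boolean onSave(Session s) throws CallbackException;   // return true to veto
    boolean onUpdate(Session s) throws CallbackException; // return true to veto
    boolean onDelete(Session s) throws CallbackException; // return true to veto
    void onLoad(Session s, Serializable id);
}
```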


 Post subject:
PostPosted: Tue Mar 09, 2004 6:59 pm 
Expert

Joined: Thu Jan 08, 2004 6:17 pm
Posts: 278
!!!!!!!!!!!!!Lifecycle!!!!!!!!!!!!!!!

What an idiot I am -- Lifecycle is *exactly* what I want. Sorry for wasting your time... I didn't even realize it existed. (Clearly I haven't reread the reference manual ENOUGH TIMES!)

Incidentally, http://www.hibernate.org/hib_docs/reference/html_single/#manipulating-data-s15 should be updated to mention that the session must not be called from inside an Interceptor (this is not mentioned anywhere at present). It should probably also say: "If you want to use the session from inside an Interceptor, you probably really want Lifecycle instead; see section 4.3." (That sure would have kept *me* from sucking your valuable cycles!)

Cheers!
Rob


 Post subject:
PostPosted: Tue Mar 09, 2004 9:55 pm 
Expert

Joined: Thu Jan 08, 2004 6:17 pm
Posts: 278
OK, so I have given Lifecycle a try, but now I am re-stuck.

So, modifying my example above, I got rid of the LifecycleInterceptor and Deletable classes altogether. Now I have:

Code:
public class WorkRequest extends com.nimblefish.core.domain.generated.WorkRequestGen
implements Lifecycle {
...
    public boolean onDelete(Session session) throws CallbackException {
        try {
            WorkFinder.deleteWorkRequestDependentJobs(this.getId());
        } catch (NfPersistenceException e) {
            throw new CallbackException("Couldn't delete dependent jobs", e);
        }
        return false;
    }
...
}

And everything else is as above, with the sole exception of an additional dependent mapping from Job to Task (which I didn't show before but which is now relevant):

Code:
    <class name="com.nimblefish.core.domain.work.Job">

        <id name="id" type="long" unsaved-value="null" >
            <meta attribute="scope-set">private</meta>
            <generator class="native"/>
        </id>

        <!-- to enable polymorphism -->
        <discriminator/>

        <property name="status" type="com.nimblefish.core.domain.work.Status"/>
        <property name="percentComplete" type="float"/>
        <many-to-one name="workRequest" column="workrequest_id"
            class="com.nimblefish.core.domain.work.WorkRequest"/>

        <!-- what are the tasks for this job? -->
        <set name="tasks" cascade="all-delete-orphan" lazy="true" inverse="true">
            <key column="job_id"/>
            <one-to-many class="com.nimblefish.core.domain.work.Task"/>
        </set>
...
        <subclass name="com.nimblefish.core.domain.work.ImportJob"
            extends="com.nimblefish.core.domain.work.Job"
            discriminator-value="IMP">
            <meta attribute="generated-class">com.nimblefish.core.domain.generated.ImportJobGen</meta>

        </subclass>

    </class>

    <class name="com.nimblefish.core.domain.work.Task">
        <id name="id" type="long" unsaved-value="null" >
            <meta attribute="scope-set">private</meta>
            <generator class="native"/>
        </id>

        <discriminator/>

        <many-to-one name="job" column="job_id" not-null="true"
            class="com.nimblefish.core.domain.work.Job"/>

        <subclass name="com.nimblefish.core.domain.work.ImportTask"
            extends="com.nimblefish.core.domain.work.Task"
            discriminator-value="IMP">
            <meta attribute="generated-class">com.nimblefish.core.domain.generated.ImportTaskGen</meta>

        </subclass>

    </class>

I am testing this with test code that worked fine in a previous version, when I had a straight parent-to-child relationship from WorkRequest to Job, and then from Job to Task. In that version I could cascade a delete to a WorkRequest, and it would cascade all the way down to Task with no problems. (So I'm not totally ignorant of parent-child issues ;-)

The change I am making -- using Lifecycle, and a WorkRequest-to-Job relationship that is no longer a simple parent-child one-to-many -- seems to break this. In the new version, with the Lifecycle-aware WorkRequest, the cascading delete fails with a foreign key constraint violation. Here's the Hibernate debug output, with as much detail elided as possible:

Code:
    [junit] 17:25:56,291 DEBUG PersistenceManagerTest:245 - *** Removing work request...
    [junit] 17:25:56,307 DEBUG SessionImpl:2193 - flushing session
    [junit] 17:25:56,307 DEBUG Cascades:497 - processing cascades for: com.nimblefish.core.domain.campaign.Client
    [junit] 17:25:56,307 DEBUG Cascades:524 - cascading to collection: com.nimblefish.core.domain.campaign.Client.campaigns
    [junit] 17:25:56,323 DEBUG Cascades:113 - cascading to saveOrUpdate()
    [junit] 17:25:56,323 DEBUG SessionImpl:1306 - saveOrUpdate() persistent instance
    [junit] 17:25:56,323 DEBUG Cascades:506 - done processing cascades for: com.nimblefish.core.domain.campaign.Client
    [junit] 17:25:56,338 DEBUG Cascades:497 - processing cascades for: com.nimblefish.core.domain.campaign.Campaign
    [junit] 17:25:56,338 DEBUG Cascades:524 - cascading to collection: com.nimblefish.core.domain.campaign.Campaign.assets
    [junit] 17:25:56,354 DEBUG Cascades:524 - cascading to collection: com.nimblefish.core.domain.campaign.Campaign.workRequests
    [junit] 17:25:56,354 DEBUG SessionImpl:1099 - deleting a persistent instance
    [junit] 17:25:56,370 DEBUG SessionImpl:1119 - deleting [com.nimblefish.core.domain.work.WorkRequest#1]
    [junit] 17:25:56,370 DEBUG SessionImpl:1146 - calling onDelete()
    [junit] 17:25:56,385 DEBUG WorkFinder:107 - Deleting jobs dependent on work request 1
    [junit] 17:25:56,401 DEBUG SessionImpl:1460 - find: from Job job where job.workRequest.id = ?
.....
    [junit] 17:25:57,213 DEBUG WorkFinder:125 - Deleting job com.nimblefish.core.domain.work.ImportJob@15cd9c0[id=1]
    [junit] 17:25:57,213 DEBUG SessionImpl:1099 - deleting a persistent instance
    [junit] 17:25:57,229 DEBUG SessionImpl:1119 - deleting [com.nimblefish.core.domain.work.ImportJob#1]
    [junit] 17:25:57,229 DEBUG Cascades:497 - processing cascades for: com.nimblefish.core.domain.work.ImportJob
    [junit] 17:25:57,245 DEBUG Cascades:524 - cascading to collection: com.nimblefish.core.domain.work.Job.tasks
    [junit] 17:25:57,245 DEBUG SessionImpl:3132 - initializing collection [com.nimblefish.core.domain.work.Job.tasks#1]
.....
    [junit] 17:25:57,745 DEBUG SessionImpl:2678 - Scheduling collection removes/(re)creates/updates
    [junit] 17:25:57,745 DEBUG SessionImpl:2217 - Flushed: 0 insertions, 0 updates, 3 deletions to 5 objects
    [junit] 17:25:57,745 DEBUG SessionImpl:2222 - Flushed: 0 (re)creations, 0 updates, 7 removals to 12 collections
    [junit] 17:25:57,760 DEBUG Printer:75 - listing entities:
    [junit] 17:25:57,760 DEBUG Printer:82 - com.nimblefish.core.domain.work.ImportTask{endRecord=100, job=ImportJob#1, inputCsvVersion=null, startRecord=0, status=0, id=1}
    [junit] 17:25:57,776 DEBUG Printer:82 - com.nimblefish.core.domain.work.ImportJob{outputVersions=uninitialized, outputLists=uninitialized, predecessorJobs=[], WorkRequest=WorkRequest#1, inputVersions=uninitialized, successorJobs=[], percentComplete=0.0, status=0, inputLists=uninitialized, tasks=[ImportTask#1], id=1}
    [junit] 17:25:57,791 DEBUG Printer:82 - com.nimblefish.core.domain.work.WorkRequest{lastTaskTime=0, campaign=Campaign#1, status=0, initialJobs=uninitialized, name=test work request, id=1}
    [junit] 17:25:57,791 DEBUG Printer:82 - com.nimblefish.core.domain.campaign.Campaign{client=Client#1, assets=uninitialized, workRequests=[], name=test1, id=1, lists=uninitialized}
    [junit] 17:25:57,807 DEBUG Printer:82 - com.nimblefish.core.domain.campaign.Client{campaigns=[Campaign#1], name=test, id=1}
    [junit] 17:25:57,807 DEBUG SessionImpl:2258 - executing flush
    [junit] 17:25:57,823 DEBUG BasicCollectionPersister:491 - Deleting collection: [com.nimblefish.core.domain.work.Job.inputLists#1]
    [junit] 17:25:57,838 DEBUG BatcherImpl:192 - about to open: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:57,838 DEBUG SQL:223 - delete from job_to_input_list where list_id=?
    [junit] 17:25:57,854 DEBUG BatcherImpl:227 - preparing statement
    [junit] 17:25:57,854 DEBUG LongType:46 - binding '1' to parameter: 1
    [junit] 17:25:57,854 DEBUG BatcherImpl:28 - Adding to batch
    [junit] 17:25:57,854 DEBUG BasicCollectionPersister:507 - done deleting collection
    [junit] 17:25:57,870 DEBUG BasicCollectionPersister:491 - Deleting collection: [com.nimblefish.core.domain.work.Job.inputVersions#1]
    [junit] 17:25:57,870 DEBUG BatcherImpl:50 - Executing batch size: 1
    [junit] 17:25:57,870 DEBUG BatcherImpl:199 - done closing: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:57,885 DEBUG BatcherImpl:240 - closing statement
    [junit] 17:25:57,885 DEBUG BatcherImpl:192 - about to open: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:57,901 DEBUG SQL:223 - delete from job_to_input_version where list_id=?
    [junit] 17:25:57,901 DEBUG BatcherImpl:227 - preparing statement
    [junit] 17:25:57,916 DEBUG LongType:46 - binding '1' to parameter: 1
    [junit] 17:25:57,916 DEBUG BatcherImpl:28 - Adding to batch
    [junit] 17:25:57,932 DEBUG BasicCollectionPersister:507 - done deleting collection
    [junit] 17:25:57,932 DEBUG BasicCollectionPersister:491 - Deleting collection: [com.nimblefish.core.domain.work.Job.outputLists#1]
    [junit] 17:25:57,932 DEBUG BatcherImpl:50 - Executing batch size: 1
    [junit] 17:25:57,948 DEBUG BatcherImpl:199 - done closing: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:57,948 DEBUG BatcherImpl:240 - closing statement
    [junit] 17:25:57,948 DEBUG BatcherImpl:192 - about to open: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:57,963 DEBUG SQL:223 - delete from job_to_output_list where list_id=?
    [junit] 17:25:57,963 DEBUG BatcherImpl:227 - preparing statement
    [junit] 17:25:57,963 DEBUG LongType:46 - binding '1' to parameter: 1
    [junit] 17:25:57,979 DEBUG BatcherImpl:28 - Adding to batch
    [junit] 17:25:57,979 DEBUG BasicCollectionPersister:507 - done deleting collection
    [junit] 17:25:57,979 DEBUG BasicCollectionPersister:491 - Deleting collection: [com.nimblefish.core.domain.work.Job.outputVersions#1]
    [junit] 17:25:57,995 DEBUG BatcherImpl:50 - Executing batch size: 1
    [junit] 17:25:58,010 DEBUG BatcherImpl:199 - done closing: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:58,010 DEBUG BatcherImpl:240 - closing statement
    [junit] 17:25:58,026 DEBUG BatcherImpl:192 - about to open: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:58,026 DEBUG SQL:223 - delete from job_to_output_version where list_id=?
    [junit] 17:25:58,041 DEBUG BatcherImpl:227 - preparing statement
    [junit] 17:25:58,041 DEBUG LongType:46 - binding '1' to parameter: 1
    [junit] 17:25:58,041 DEBUG BatcherImpl:28 - Adding to batch
    [junit] 17:25:58,041 DEBUG BasicCollectionPersister:507 - done deleting collection
    [junit] 17:25:58,057 DEBUG BatcherImpl:50 - Executing batch size: 1
    [junit] 17:25:58,057 DEBUG BatcherImpl:199 - done closing: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:58,057 DEBUG BatcherImpl:240 - closing statement
    [junit] 17:25:58,073 DEBUG EntityPersister:553 - Deleting entity: [com.nimblefish.core.domain.work.WorkRequest#1]
    [junit] 17:25:58,073 DEBUG BatcherImpl:192 - about to open: 0 open PreparedStatements, 0 open ResultSets
    [junit] 17:25:58,088 DEBUG SQL:223 - delete from WorkRequest where id=?
    [junit] 17:25:58,088 DEBUG BatcherImpl:227 - preparing statement
    [junit] 17:25:58,088 DEBUG LongType:46 - binding '1' to parameter: 1
    [junit] 17:25:58,104 DEBUG BatcherImpl:28 - Adding to batch
    [junit] 17:25:58,104 DEBUG EntityPersister:553 - Deleting entity: [com.nimblefish.core.domain.work.ImportTask#1]
    [junit] 17:25:58,120 DEBUG BatcherImpl:50 - Executing batch size: 1
    [junit] 17:25:58,120 DEBUG JDBCExceptionReporter:36 - SQL Exception
    [junit] java.sql.BatchUpdateException: General error, message from server: "Cannot delete or update a parent row: a foreign key constraint fails"
    [junit]     at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1469)
    [junit]     at net.sf.hibernate.impl.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:54)
    [junit]     at net.sf.hibernate.impl.BatcherImpl.executeBatch(BatcherImpl.java:118)
    [junit]     at net.sf.hibernate.impl.BatcherImpl.prepareStatement(BatcherImpl.java:55)
    [junit]     at net.sf.hibernate.impl.BatcherImpl.prepareBatchStatement(BatcherImpl.java:105)
    [junit]     at net.sf.hibernate.persister.EntityPersister.delete(EntityPersister.java:565)
    [junit]     at net.sf.hibernate.impl.ScheduledDeletion.execute(ScheduledDeletion.java:29)
    [junit]     at net.sf.hibernate.impl.SessionImpl.executeAll(SessionImpl.java:2308)
    [junit]     at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2266)
    [junit]     at net.sf.hibernate.impl.SessionImpl.flush(SessionImpl.java:2187)
    [junit]     at com.nimblefish.core.persist.Persistence.flush(Persistence.java:300)
    [junit]     at com.nimblefish.core.test.persist.PersistenceManagerTest.testTask(PersistenceManagerTest.java:247)


So the upshot is that the cascade seems to work, insofar as the final list of entities to be deleted in the flush is correct: a Client, a Campaign, a WorkRequest, an ImportJob, and an ImportTask, which is exactly right. And the foreign key violation seems to be thrown when the batcher goes to delete the ImportTask. But there are no foreign key constraints pointing at ImportTask! That is, I can't see which foreign key constraint is being violated here, given that it seems to be the ImportTask that is failing to be peacefully deleted.

Does anyone have any pointers as to how to get more information about which foreign key constraints are failing at the SQL / database level in this kind of cascade failure situation?
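One generally useful diagnostic here (assuming InnoDB, which the "foreign key constraint fails" message suggests): immediately after the failure, ask the server for its most recent foreign key error, which names the exact constraint and the row involved:

```sql
-- On MySQL 4.0-era servers; newer servers spell it SHOW ENGINE INNODB STATUS.
-- Look for the "LATEST FOREIGN KEY ERROR" section in the output.
SHOW INNODB STATUS;
```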

Thanks very, very much.
Cheers,
Rob


 Post subject:
PostPosted: Tue Mar 09, 2004 10:10 pm 
Expert

Joined: Thu Jan 08, 2004 6:17 pm
Posts: 278
Sorry. Rereading that stack trace, it seems that the last thing Hibernate tried to do was delete the WorkRequest in a one-row batch. That is not possible since the ImportJob (which references it) has not been deleted yet, and hence the WorkRequest deletion fails.

So perhaps the question I should have asked is: why does Hibernate try to delete the WorkRequest before the Job objects?

I delete the Job objects in a Lifecycle callback method invoked by the WorkRequest delete cascade, so in theory the Job objects should be deleted *before* the WorkRequest is (that was my intent when I wrote the code). But that's not what seems to happen: the delete for the one ImportJob object I have here gets flushed along with the other deleted objects, but Hibernate doesn't have enough information to know it must delete the ImportJob *before* the WorkRequest. Is that possible? If so, is there any fix or workaround?
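One application-level workaround, in the spirit of the "WorkRequest.delete()" idea floated at the top of the thread: skip the callback machinery entirely and schedule the deletes yourself, in FK-safe order. A sketch against the Hibernate 2.x Session API (`WorkRequestDeleter` is a hypothetical name; the assumption, consistent with the reference documentation, is that entity deletions execute in the order `delete()` was called):

```java
import java.util.Iterator;
import java.util.List;
import net.sf.hibernate.Hibernate;
import net.sf.hibernate.HibernateException;
import net.sf.hibernate.Session;

// Sketch: delete the dependent Jobs first, then the WorkRequest, in one
// session/transaction, so the SQL DELETEs cannot violate the
// Job -> WorkRequest foreign key.
public class WorkRequestDeleter {
    public static void delete(Session session, WorkRequest wr)
            throws HibernateException {
        List jobs = session.find("from Job job where job.workRequest.id = ?",
                wr.getId(), Hibernate.LONG);
        for (Iterator it = jobs.iterator(); it.hasNext(); ) {
            session.delete(it.next()); // schedule the Jobs first...
        }
        session.delete(wr);            // ...and the WorkRequest last
        session.flush();               // deletions run in scheduled order
    }
}
```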

Setting hibernate.jdbc.batch_size to 0 doesn't change the behavior of this example at all.

I can try to work up a simpler example if that would be helpful.

Cheers,
Rob


 Post subject:
PostPosted: Wed Mar 10, 2004 5:11 pm 
Expert

Joined: Thu Jan 08, 2004 6:17 pm
Posts: 278
<bump>

I have just posted a follow-on to this topic in the Application Architecture forum (which is not so overwhelmed with posts) at:

http://forum.hibernate.org/viewtopic.ph ... 57#2194257

including a much simpler example with complete source. Definitely some big issues here!

Cheers,
Rob


© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.