Well, I can tell you the cause of this behaviour. Take a look at the block of code that actually performs the deletions / cascades / Lifecycle.onDelete functionality. It is in SessionImpl in 2.1.x and earlier, and in the new AbstractDeleteCommand (specifically lines 134 - 202) in the new v22branch. That logic goes like this:
Code:
List deletionsByOnDelete = null;
HashSet nullifiablesAfterOnDelete = null;
if ( persister.implementsLifecycle() ) {
    HashSet oldNullifiables = ( HashSet ) getSession().nullifiables.clone();
    ArrayList oldDeletions = ( ArrayList ) getSession().deletions.clone();
    getSession().nullifiables.add(key); //the deletion of the parent is actually executed BEFORE any deletion from onDelete()
    try {
        getLogger().debug( "calling onDelete()" );
        if ( ( ( Lifecycle ) object ).onDelete( getSession() ) ) {
            //rollback deletion
            entry.status = SessionImpl.LOADED;
            entry.deletedState = null;
            getSession().nullifiables = oldNullifiables;
            getLogger().debug( "deletion vetoed by onDelete()" );
            return; //don't let it cascade
        }
    }
    catch (CallbackException ce) {
        //rollback deletion
        entry.status = SessionImpl.LOADED;
        entry.deletedState = null;
        getSession().nullifiables = oldNullifiables;
        throw ce;
    }
    //note, the following assumes that onDelete() didn't cause the session
    //to be flushed! TODO: add a better check that it doesn't
    if ( oldDeletions.size() > getSession().deletions.size() ) {
        throw new HibernateException( "session was flushed during onDelete()" );
    }
    deletionsByOnDelete = getSession().deletions.subList(
            oldDeletions.size(),
            getSession().deletions.size()
    );
    getSession().deletions = oldDeletions;
    nullifiablesAfterOnDelete = getSession().nullifiables;
    getSession().nullifiables = oldNullifiables;
}
getSession().incrementCascadeLevel();
try {
    // cascade-delete to collections BEFORE the collection owner is deleted
    Cascades.cascade(
            getSession(),
            persister,
            object,
            Cascades.ACTION_DELETE,
            Cascades.CASCADE_AFTER_INSERT_BEFORE_DELETE
    );
}
finally {
    getSession().decrementCascadeLevel();
}
getSession().nullifyTransientReferences( entry.deletedState, propTypes, false, object );
checkNullability( entry.deletedState, persister, true );
getSession().nullifiables.add(key);
// Ensures that containing deletions happen before sub-deletions
getSession().deletions.add(
        new ScheduledDeletion( entry.id, version, object, persister, getSession() )
);
if ( persister.implementsLifecycle() ) {
    // after nullify, because we don't want to nullify references to subdeletions
    getSession().nullifiables.addAll( nullifiablesAfterOnDelete );
    // after deletions.add(), to respect foreign key constraints
    getSession().deletions.addAll( deletionsByOnDelete );
}
Essentially, before Lifecycle.onDelete() is called, a snapshot is taken of all currently scheduled deletions. After Lifecycle.onDelete() returns, that snapshot is compared against the current list to determine which deletions were requested by the callback (these are collected into the deletionsByOnDelete list). The original snapshot is then reinstated, which "postpones" any deletes requested by onDelete() until after the requested delete of the parent and any actual collection cascades have been scheduled.
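To make the snapshot-and-splice trick concrete, here is a minimal standalone sketch (not Hibernate code; the list contents are made-up placeholders) showing why the callback's deletions end up ordered after the parent's:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        // the session's queue of scheduled deletions (stand-in for getSession().deletions)
        List<String> deletions = new ArrayList<>();
        deletions.add("previouslyScheduled");

        // 1. snapshot before the callback runs
        List<String> oldDeletions = new ArrayList<>(deletions);

        // 2. the onDelete() analogue schedules its own deletions
        deletions.add("childA");
        deletions.add("childB");

        // 3. everything past the snapshot length was requested by the callback
        List<String> deletionsByOnDelete = new ArrayList<>(
                deletions.subList(oldDeletions.size(), deletions.size()));

        // 4. reinstate the snapshot, postponing the callback's deletions
        deletions = oldDeletions;

        // 5. schedule the parent delete, THEN re-append the postponed ones,
        //    so the parent row is deleted before the rows onDelete() asked for
        deletions.add("parent");
        deletions.addAll(deletionsByOnDelete);

        System.out.println(deletions);
    }
}
```

Running this prints `[previouslyScheduled, parent, childA, childB]`: the parent's delete is executed before the onDelete()-requested deletes, which is exactly the ordering the comment on the `nullifiables.add(key)` line describes.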
I would need to think through this some more to give you a better answer as to the reason and a solution.
Out of curiosity, is there a particular reason you cannot map the association between Root and Node and then define the cascade between Root and Node declaratively (using all-delete-orphan or delete-orphan)? That approach would work. I actually do it that way myself, even mapping collections I never access just to get the cascading-delete capability.
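For illustration, a declarative mapping along those lines might look like this (a sketch only; the class and column names here are assumptions, not taken from your mapping):

```xml
<class name="Root" table="root">
    <id name="id" column="id">
        <generator class="native"/>
    </id>
    <!-- deleting a Root cascades the delete to its Nodes,
         and removing a Node from the set deletes the orphan -->
    <set name="nodes" cascade="all-delete-orphan" lazy="true">
        <key column="root_id"/>
        <one-to-many class="Node"/>
    </set>
</class>
```

With a mapping like this, `session.delete(root)` schedules the Node deletions through the normal cascade machinery shown above, with no Lifecycle.onDelete() involvement and no ordering surprises.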