
All times are UTC - 5 hours [ DST ]



Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 8 posts ] 
Author Message
 Post subject: Insert problem with cascade
PostPosted: Thu Nov 13, 2003 8:49 am 
Newbie

Joined: Thu Oct 09, 2003 2:52 am
Posts: 6
I have a (so far) simple application with subscribers and attributes associated with subscribers. My mapping is as follows:

Code:
<class name="Subscriber" table="subscriber">
  <id name="subscriberId" unsaved-value="-1">
    <generator class="native" />
  </id> 
  <property name="username" type="string" />
  <set name="attributes" lazy="true" cascade="all">
    <key column="subscriberId" />
    <one-to-many class="SubscriberAttribute" />                   
  </set>
</class>

<class name="SubscriberAttribute" table="subscriber_attribute">
  <id name="attributeId" unsaved-value="-1">
    <generator class="native" />
  </id>
  <property name="attrKey" type="string" />
  <property name="attrValue" type="string" />
  <many-to-one name="subscriber" class="Subscriber"
    column="subscriberId" not-null="true" cascade="none" />
</class>


In my java code I do (simplified a bit to make it short):

Code:
Subscriber s = new Subscriber(-1, "username");
Set attributes = new HashSet();
attributes.add(new SubscriberAttribute(-1, key1, value1));
attributes.add(new SubscriberAttribute(-1, key2, value2));
attributes.add(new SubscriberAttribute(-1, key3, value3));
s.setAttributes(attributes);
session.save(s);


After I run the code, the subscriber is added, and all the attributes as well. But what troubles me is the way they are created (I know this for sure because I have set profileSql=true in my JDBC settings):

1. a new row is inserted in the subscriber table:
insert into subscriber ...

2. id is fetched from the DB:
SELECT LAST_INSERT_ID()

3. a row is inserted into the subscriber_attribute table three times, and after each insert the id is fetched from the DB:
insert into subscriber_attribute (attrKey, attrValue, subscriberId) values ('key1', 'value1', 1)
SELECT LAST_INSERT_ID()
insert into subscriber_attribute (attrKey, attrValue, subscriberId) values ('key2', 'value2', 1)
SELECT LAST_INSERT_ID()
insert into subscriber_attribute (attrKey, attrValue, subscriberId) values ('key3', 'value3', 1)
SELECT LAST_INSERT_ID()

4. each row inserted in step 3 is then updated:
update subscriber_attribute set subscriberId=1 where attributeId=1
update subscriber_attribute set subscriberId=1 where attributeId=2
update subscriber_attribute set subscriberId=1 where attributeId=3

What causes the rows in subscriber_attribute to be updated at the end? I can see no reason for it; the subscriberId was already set when the rows were inserted.

Is there any way to have Hibernate insert the rows (in subscriber_attribute) in one query rather than inserting them one by one?

I'm using Hibernate 2.0.3 and MySQL (with INNODB).


 Post subject:
PostPosted: Thu Nov 13, 2003 8:57 am 
Hibernate Team

Joined: Tue Aug 26, 2003 12:50 pm
Posts: 5130
Location: Melbourne, Australia
Please read the Parent/Child relationship documentation.
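[For readers following along: the fix described in the Parent/Child chapter is to make the collection end of the association inverse, so that only the many-to-one side owns the foreign key column; that removes the redundant UPDATEs. A sketch against the mapping above, using the same names as the original post and not verified against 2.0.3:

Code:
<set name="attributes" lazy="true" cascade="all" inverse="true">
  <key column="subscriberId" />
  <one-to-many class="SubscriberAttribute" />
</set>

With inverse="true", each child must have its subscriber property set in Java (e.g. attribute.setSubscriber(s)) before saving, since Hibernate then ignores the collection when writing the foreign key column.]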


 Post subject:
PostPosted: Thu Nov 13, 2003 9:25 am 
Newbie

Joined: Thu Oct 09, 2003 2:52 am
Posts: 6
That's so obvious, thanks Gavin! Stupid me...

That solves the insert/update problem, but is there any way to have Hibernate insert the rows (in subscriber_attribute) in one query rather than inserting them one by one? According to the documentation there isn't, am I right?


 Post subject:
PostPosted: Thu Nov 13, 2003 9:30 am 
Hibernate Team

Joined: Tue Aug 26, 2003 12:50 pm
Posts: 5130
Location: Melbourne, Australia
Eh? A SQL INSERT can only insert one row at a time.

I don't follow you.


 Post subject:
PostPosted: Thu Nov 13, 2003 9:50 am 
Newbie

Joined: Thu Oct 09, 2003 2:52 am
Posts: 6
Nope. In MySQL you can insert several rows with a single INSERT statement. Like this:

insert into mytable values (a1,b1,c1),(a2,b2,c2),(a3,b3,c3);


 Post subject:
PostPosted: Thu Nov 13, 2003 10:10 am 
Hibernate Team

Joined: Tue Aug 26, 2003 12:50 pm
Posts: 5130
Location: Melbourne, Australia
Oh, OK, this is nonstandard.

Does the MySQL driver support this via the JDBC batch update API? It should, in which case Hibernate can do it already...

DB2 also supports this syntax.

I *could* probably be convinced that it would be worth supporting this syntax, but only if JDBC drivers don't already do it under the covers.


Would you be able to do a little bit of research into this please?

TIA.


 Post subject:
PostPosted: Thu Nov 13, 2003 10:45 am 
Newbie

Joined: Thu Oct 09, 2003 2:52 am
Posts: 6
I tried setting:
Code:
hibernate.jdbc.batch_size 25


It got initialized correctly:
Code:
net.sf.hibernate.impl.SessionFactoryImpl - JDBC 2 max batch size: 25


and I got no errors, but the inserts were still made one by one. Am I right in assuming that the MySQL connector I'm using (3.0.8) doesn't support batch updates?


 Post subject:
PostPosted: Thu Nov 13, 2003 11:33 am 
Hibernate Team

Joined: Tue Aug 26, 2003 12:50 pm
Posts: 5130
Location: Melbourne, Australia
Well, the other way they could do it is to have the JDBC driver send the whole batch of single inserts in one round trip (this would be the normal way). The end result should be the same performance-wise. Why not email the MySQL list?
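[A note for later readers: the behavior asked about here did eventually land in the driver. Versions of MySQL Connector/J newer than the 3.0.8 discussed in this thread added a rewriteBatchedStatements connection property that rewrites a JDBC batch of single-row inserts into MySQL's multi-row INSERT syntax under the covers. A sketch of the connection URL; the host and database name are placeholders:

Code:
jdbc:mysql://localhost/mydb?rewriteBatchedStatements=true

Combined with hibernate.jdbc.batch_size, this gives the single-statement insert the original poster was after, with no change on the Hibernate side.]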


© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.