tag:blogger.com,1999:blog-31183337043163992012024-03-08T04:19:06.472-08:00My MySQL RoadDo practicals, Learn subject, Share knowledge, Change ....Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.comBlogger29125tag:blogger.com,1999:blog-3118333704316399201.post-39409562755386335912015-10-05T16:14:00.002-07:002015-10-13T17:58:36.413-07:00Multiple ways to find duplicates rows and delete safely from bulk number of tables<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" trbidi="on">
<div dir="ltr" trbidi="on">
<div dir="ltr" trbidi="on">
<div dir="ltr" trbidi="on">
<div dir="ltr" trbidi="on">
<div dir="ltr" trbidi="on">
Here comes the scenario: a bug in our application introduced duplicate rows into a table that has no unique key. How do we find the duplicate rows, delete them safely, and keep a copy of everything we deleted?<br />
<br />
Let's prepare a test setup for the scenario: create a table, insert some duplicates as below, and select the rows to confirm.<br />
<div style="text-align: left;">
<span style="font-family: inherit;"><br /></span>
<br />
<pre style="color: #632035; overflow: auto;"><span style="background-color: #6aa84f;">CREATE TABLE `test` (
`bid` int(11) NOT NULL AUTO_INCREMENT,
`addtime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`refid` varchar(255) NOT NULL,
PRIMARY KEY (`bid`),
KEY `addtime` (`addtime`),
KEY `refid` (`refid`)
) ENGINE=InnoDB AUTO_INCREMENT=128 DEFAULT CHARSET=utf8
MariaDB [test]> insert into test(refid) values ('suresh'),('kumar'),('kuna') ;
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
MariaDB [test]> insert into test(refid) values ('suresh'),('kumar'),('kuna') ;
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
MariaDB [test]> insert into test(refid) values ('suresh'),('kumar'),('kuna') ;
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
MariaDB [test]> insert into test(refid) values ('suresh'),('kumar'),('kuna') ;
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
MariaDB [test]> select * from test ;
+-----+---------------------+--------+
| bid | addtime | refid |
+-----+---------------------+--------+
| 84 | 2015-10-05 14:31:36 | test |
| 85 | 2015-10-05 14:31:36 | test1 |
| 113 | 2015-10-05 14:31:50 | suresh |
| 114 | 2015-10-05 14:31:50 | kumar |
| 115 | 2015-10-05 14:31:50 | kuna |
| 116 | 2015-10-05 15:35:39 | suresh |
| 117 | 2015-10-05 15:35:39 | kumar |
| 118 | 2015-10-05 15:35:39 | kuna |
| 119 | 2015-10-05 15:35:40 | suresh |
| 120 | 2015-10-05 15:35:40 | kumar |
| 121 | 2015-10-05 15:35:40 | kuna |
| 122 | 2015-10-05 15:35:40 | suresh |
| 123 | 2015-10-05 15:35:40 | kumar |
| 124 | 2015-10-05 15:35:40 | kuna |
| 125 | 2015-10-05 15:35:41 | suresh |
| 126 | 2015-10-05 15:35:41 | kumar |
| 127 | 2015-10-05 15:35:41 | kuna |
+-----+---------------------+--------+
17 rows in set (0.00 sec)</span></pre>
</div>
<div style="text-align: left;">
<span style="background-color: #93c47d;"><span style="font-family: inherit;"><br /></span></span>Now we have a table called test that contains duplicate rows. </div>
</div>
</div>
<div style="text-align: left;">
</div>
</div>
<div>
<br /></div>
<div>
We can delete the duplicates in several ways. The easiest, when we have only one table and few rows to delete, is a simple SELECT with a WHERE condition as below, followed by a DELETE that uses that SELECT as a subquery. Run the DELETE statement in a loop until the check query returns zero rows.<br />
<span style="background-color: #6aa84f;"><br /></span>
<br />
<pre style="overflow: auto;"><span style="background-color: #6aa84f;">MariaDB [test]> select refid, count(1) from test where addtime between '2015-09-23 00:00:00' and '2015-10-30 00:00:00' group by refid HAVING count(*) > 1;</span></pre>
</div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">| refid | count(1) |</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">| kumar | 5 |</span></div>
<div>
<span style="background-color: #6aa84f;">| kuna | 5 |</span></div>
<div>
<span style="background-color: #6aa84f;">| suresh | 5 |</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">3 rows in set (0.00 sec)</span></div>
</div>
<div>
<span style="background-color: #6aa84f;"><br /></span></div>
<div>
<div>
<span style="background-color: #6aa84f;"><br /></span></div>
<div>
<span style="background-color: #6aa84f;">MariaDB [test]> delete from test where refid in (select * from (select refid from test where addtime between '2015-09-23 00:00:00' and '2015-10-30 00:00:00' group by refid HAVING count(*) > 1 ) a) limit 1 ;</span></div>
<div>
<span style="background-color: #6aa84f;">Query OK, 1 row affected, 2 warnings (0.00 sec)</span></div>
<div>
<span style="background-color: #6aa84f;"><br /></span></div>
<div>
<span style="background-color: #6aa84f;">MariaDB [test]> select refid, count(1) from test where addtime between '2015-09-23 00:00:00' and '2015-10-30 00:00:00' group by refid HAVING count(*) > 1;</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">| refid | count(1) |</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">| kumar | 5 |</span></div>
<div>
<span style="background-color: #6aa84f;">| kuna | 5 |</span></div>
<div>
<span style="background-color: #6aa84f;">| suresh | 4 |</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">3 rows in set (0.00 sec)</span></div>
<div>
<span style="background-color: #6aa84f;"><br /></span></div>
<div>
<span style="background-color: #6aa84f;">MariaDB [test]> delete from test where refid in (select * from (select refid from test where addtime between '2015-09-23 00:00:00' and '2015-10-30 00:00:00' group by refid HAVING count(*) > 1 ) a) limit 1 ;</span></div>
<div>
<span style="background-color: #6aa84f;">Query OK, 1 row affected, 2 warnings (0.00 sec)</span></div>
<div>
<span style="background-color: #6aa84f;"><br /></span></div>
<div>
<span style="background-color: #6aa84f;">MariaDB [test]> select refid, count(1) from test where addtime between '2015-09-23 00:00:00' and '2015-10-30 00:00:00' group by refid HAVING count(*) > 1;</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">| refid | count(1) |</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">| kumar | 4 |</span></div>
<div>
<span style="background-color: #6aa84f;">| kuna | 5 |</span></div>
<div>
<span style="background-color: #6aa84f;">| suresh | 4 |</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">3 rows in set (0.00 sec)</span></div>
<div>
<span style="background-color: #6aa84f;"><br /></span></div>
<div>
<span style="background-color: #6aa84f;">MariaDB [test]> delete from test where refid in (select * from (select refid from test where addtime between '2015-09-23 00:00:00' and '2015-10-30 00:00:00' group by refid HAVING count(*) > 1 ) a) limit 1 ;</span></div>
<div>
<span style="background-color: #6aa84f;">Query OK, 1 row affected, 2 warnings (0.01 sec)</span></div>
<div>
<span style="background-color: #6aa84f;"><br /></span></div>
<div>
<span style="background-color: #6aa84f;">MariaDB [test]> select refid, count(1) from test where addtime between '2015-09-23 00:00:00' and '2015-10-30 00:00:00' group by refid HAVING count(*) > 1;</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">| refid | count(1) |</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">| kumar | 4 |</span></div>
<div>
<span style="background-color: #6aa84f;">| kuna | 4 |</span></div>
<div>
<span style="background-color: #6aa84f;">| suresh | 4 |</span></div>
<div>
<span style="background-color: #6aa84f;">+--------+----------+</span></div>
<div>
<span style="background-color: #6aa84f;">3 rows in set (0.00 sec)</span></div>
</div>
</div>
</div>
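The delete-one-per-pass loop above can be sketched as a standalone script. This is an illustrative sketch only: it uses SQLite so it runs without a server, whereas the post targets MariaDB, and it picks the highest id in a duplicated group as the row to remove (the post's `DELETE ... LIMIT 1` leaves the choice to the server).

```python
# Illustrative sketch only: the "delete one duplicate per pass until the
# check query returns no rows" loop from above, rebuilt on SQLite so it can
# run standalone (the post itself targets MariaDB/MySQL).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (bid INTEGER PRIMARY KEY, refid TEXT NOT NULL)")
for _ in range(4):  # insert the same three values four times -> duplicates
    conn.executemany("INSERT INTO test (refid) VALUES (?)",
                     [("suresh",), ("kumar",), ("kuna",)])

def dup_groups(conn):
    """The check query: refids that still occur more than once."""
    return conn.execute(
        "SELECT refid, COUNT(*) FROM test GROUP BY refid HAVING COUNT(*) > 1"
    ).fetchall()

# Mirror of running DELETE ... LIMIT 1 in a loop: each pass removes one
# surplus row (here the highest bid of some still-duplicated refid) until
# the check query comes back empty, leaving one row per refid.
while dup_groups(conn):
    conn.execute(
        "DELETE FROM test WHERE bid = (SELECT MAX(bid) FROM test "
        "GROUP BY refid HAVING COUNT(*) > 1 LIMIT 1)"
    )

remaining = conn.execute("SELECT COUNT(*) FROM test").fetchone()[0]
print(remaining)  # 3 -- one row left per refid
```

Because the check query doubles as the loop condition, the loop terminates exactly when every refid is down to a single row.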
<div>
</div>
<br />
<div>
</div>
Now, the interesting part. Suppose we have thousands of tables across several databases and we have to delete all the duplicates with as little effort as above, or even less.<br />
<br />
For that, we can use the Percona tool pt-archiver. Download the tool onto the server that hosts the databases and tables containing duplicates, and make the file executable.<br />
<br />
Create a new table with the same structure as the table that has duplicates, to store the deleted duplicate rows (in case we need the data later for comparison or any other reason). Then alter the new table to drop the AUTO_INCREMENT attribute, since we will be inserting duplicate ids from multiple tables, and drop the primary key as well.<br />
<br />
Finally, use pt-archiver to copy the duplicate rows into the separate table while deleting them from the bunch of source tables, and then verify the deleted data that landed in the test_duplicates table.<br />
<br />
NOTE: Please run pt-archiver with --dry-run first, to review the statements it generates before letting it touch any data.<br />
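One way to act on that note when looping over many tables is to print every command first with --dry-run added and eyeball the list before running anything. The helper below is hypothetical and purely illustrative: the host, credentials, and table names are placeholders, not values from this environment.

```python
# Hypothetical helper, illustrative only: build the pt-archiver command line
# for each table with --dry-run included, so every generated invocation can
# be reviewed before anything destructive runs. Host, credentials, and table
# names are placeholders.
def build_commands(tables):
    template = (
        "/root/pt-archiver --dry-run "
        "--source h=localhost,D=test,t={t},u=root,p=xxxx "
        "--dest D=test,t=test_duplicates "
        "--where \"refid in (select max(refid) from {t} "
        "group by refid having count(*) > 1)\" "
        "--purge --limit=1 --commit-each"
    )
    return [template.format(t=t) for t in tables]

for cmd in build_commands(["test", "test2", "test3"]):
    print(cmd)
```

Once the printed statements look right, the same list can be re-emitted without --dry-run and fed to a shell loop like the one below.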
<br />
<span style="background-color: #93c47d;"><br /></span>
<span style="background-color: #93c47d;">wget percona.com/get/pt-archiver</span><br />
<span style="background-color: #93c47d;">chmod +x pt-archiver</span><br />
<span style="background-color: #93c47d;"><br /></span>
<br />
<div class="p1">
<span class="s1" style="background-color: #93c47d;">create table test_duplicates like test ;</span></div>
<div class="p1">
<span class="s1" style="background-color: #93c47d;">alter table test_duplicates modify bid int(11) NOT NULL ;</span></div>
<span style="background-color: #93c47d;"><br /></span>
<br />
<div class="p1">
<span class="s1" style="background-color: #93c47d;">alter table test_duplicates drop primary key ;</span></div>
<div class="p1">
<span class="s1" style="background-color: #93c47d;"><br /></span></div>
<div class="p1">
<span class="s1" style="background-color: #93c47d;">* Insert a bunch of rows again into the table, then execute pt-archiver.</span></div>
<div class="p1">
<span class="s1" style="background-color: #93c47d;"><br /></span></div>
<div class="p1">
<span style="background-color: #93c47d;"><span class="s1"></span></span></div>
<div class="p1">
<span class="s1" style="background-color: #93c47d;">for i in `mysql -u root -pxxxx test -s -N -e "show tables like 'test%';"` ; do /root/pt-archiver --source h=localhost,D=test,t=$i,u=root,p=xxxx --dest D=test,t=test_duplicates --where " refid in (select max(refid) from $i where addtime between '2015-09-23 00:00:00' and '2015-10-30 00:00:00' group by refid HAVING count(*) > 1 ) " --purge --no-check-columns --no-check-charset --limit=1 --commit-each --statistics --progress=1 ; done</span></div>
<div class="p1">
<span class="s1" style="background-color: #93c47d;"><br /></span></div>
<div class="p1">
<span style="background-color: #93c47d;">TIME ELAPSED COUNT</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 0</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 1</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 2</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 3</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 4</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 5</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 6</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 7</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 8</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 9</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 10</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 11</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 12</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 13</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 14</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 15</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 16</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 17</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 18</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 19</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 20</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 21</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 22</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 23</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 24</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 25</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 26</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 27</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 28</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 29</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 30</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 31</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 32</span></div>
<div class="p1">
<span style="background-color: #93c47d;">2015-10-05T14:35:35 0 32</span></div>
<div class="p1">
<span style="background-color: #93c47d;">Started at 2015-10-05T14:35:35, ended at 2015-10-05T14:35:35</span></div>
<div class="p1">
<span style="background-color: #93c47d;">Source: D=test,h=localhost,p=...,t=test,u=root</span></div>
<div class="p1">
<span style="background-color: #93c47d;">Dest: D=test,h=localhost,p=...,t=test5,u=root</span></div>
<div class="p1">
<span style="background-color: #93c47d;">SELECT 32</span></div>
<div class="p1">
<span style="background-color: #93c47d;">INSERT 32</span></div>
<div class="p1">
<span style="background-color: #93c47d;">DELETE 32</span></div>
<div class="p1">
<span style="background-color: #93c47d;">Action Count Time Pct</span></div>
<div class="p1">
<span style="background-color: #93c47d;">select 33 0.0150 37.79</span></div>
<div class="p1">
<span style="background-color: #93c47d;">commit 66 0.0075 18.95</span></div>
<div class="p1">
<span style="background-color: #93c47d;">deleting 32 0.0058 14.56</span></div>
<div class="p1">
<span style="background-color: #93c47d;">inserting 32 0.0057 14.32</span></div>
<div class="p1">
<span style="background-color: #93c47d;"><span class="s1"></span></span></div>
<div class="p1">
<span style="background-color: #93c47d;">other 0 0.0057 14.37</span></div>
<div class="p1">
<span style="background-color: #93c47d;"><br /></span></div>
<div class="p1">
<span style="background-color: #93c47d;"><br /></span></div>
<div class="p1">
<span style="background-color: #93c47d;">MariaDB [test]> select * from test_duplicates limit 32 ;</span></div>
<div class="p1">
<span style="background-color: #93c47d;">+-----+---------------------+--------+</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| bid | addtime | refid |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">+-----+---------------------+--------+</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 1 | 2015-10-05 12:13:25 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 2 | 2015-10-05 12:13:25 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 3 | 2015-10-05 12:13:25 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 4 | 2015-10-05 12:13:28 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 5 | 2015-10-05 12:13:28 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 6 | 2015-10-05 12:13:28 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 7 | 2015-10-05 12:13:29 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 8 | 2015-10-05 12:13:29 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 9 | 2015-10-05 12:13:29 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 10 | 2015-10-05 12:13:29 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 11 | 2015-10-05 12:13:29 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 12 | 2015-10-05 12:13:29 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 13 | 2015-10-05 12:13:30 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 14 | 2015-10-05 12:13:30 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 15 | 2015-10-05 12:13:30 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 16 | 2015-10-05 12:13:31 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 17 | 2015-10-05 12:13:31 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 18 | 2015-10-05 12:13:31 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 19 | 2015-10-05 12:13:31 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 20 | 2015-10-05 12:13:31 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 21 | 2015-10-05 12:13:31 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 22 | 2015-10-05 12:13:32 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 23 | 2015-10-05 12:13:32 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 24 | 2015-10-05 12:13:32 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 25 | 2015-10-05 12:13:32 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 26 | 2015-10-05 12:13:32 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 27 | 2015-10-05 12:13:32 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 28 | 2015-10-05 12:13:33 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 29 | 2015-10-05 12:13:33 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 30 | 2015-10-05 12:13:33 | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 31 | 2015-10-05 12:13:33 | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 32 | 2015-10-05 12:13:33 | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">+-----+---------------------+--------+</span></div>
<div class="p1">
<span style="background-color: #93c47d;">32 rows in set (0.00 sec)</span></div>
<div class="p1">
<span style="background-color: #93c47d;"><br /></span></div>
<div class="p1">
<span style="background-color: #93c47d;">MariaDB [test]> select * from test;</span></div>
<div class="p1">
<span style="background-color: #93c47d;">+-----+---------------------+--------+</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| bid | addtime | refid |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">+-----+---------------------+--------+</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 33 | 2015-10-05 XX:XX:XX | suresh |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 34 | 2015-10-05 XX:XX:XX | kumar |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">| 35 | 2015-10-05 XX:XX:XX | kuna |</span></div>
<div class="p1">
<span style="background-color: #93c47d;">+-----+---------------------+--------+</span></div>
<div class="p1">
<span style="background-color: #93c47d;">3 rows in set (0.00 sec)</span></div>
<br />
<br />
Now we have all the duplicates deleted from all the tables. The process takes time proportional to the number of tables and duplicates, and it can take a really long time...<br />
<br />
Due to the time constraint, and with some other options in hand, we started looking for an alternative: select the duplicates from all the tables, insert them into a second table along with the source table name, and then do the delete.<br />
<br />
Create a table with an additional tablename column, and without any primary or unique keys.<br />
<br />
<div class="p1">
<span class="s1" style="background-color: #6aa84f;">create table test5 (</span></div>
<div class="p1">
<span class="s1" style="background-color: #6aa84f;"> `bid` int(11) NOT NULL,</span></div>
<div class="p1">
<span class="s1" style="background-color: #6aa84f;">`addtime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,</span></div>
<div class="p1">
<span class="s1" style="background-color: #6aa84f;"> `refid` varchar(255) NOT NULL,</span></div>
<div class="p1">
<span class="s1" style="background-color: #6aa84f;"> `tablename` varchar(255) NOT NULL,</span></div>
<div class="p1">
<span class="s1" style="background-color: #6aa84f;"> KEY `addtime` (`addtime`),</span></div>
<div class="p1">
<span class="s1" style="background-color: #6aa84f;"> KEY `refid` (`refid`)</span></div>
<br />
<div class="p1">
<span class="s1" style="background-color: #6aa84f;">) ENGINE=InnoDB</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">For a bunch of tables with duplicate rows, let's use a simple for loop to select the duplicates, insert them into the second table, and then do a delete with a join.</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">
</span></div>
<div class="p1">
<span class="s1" style="background-color: #6aa84f;"> for i in `mysql -u root -pxxxx test -s -N -e "show tables like 'test%';"` ; do mysql -u root -pxxxx test -e "insert into test1.test5(bid,addtime,refid,tablename) select bid,addtime,refid,'$i' from $i where addtime between '2015-09-23 00:00:00' and '2015-10-11 00:00:00' group by refid HAVING count(*) > 1 ; select sleep(1) ; delete $i from $i inner join test1.test5 b on $i.refid=b.refid and $i.addtime=b.addtime and $i.bid=b.bid where 1=1 ; " ; done</span></div>
<div>
<br />
This one worked pretty well, and we keep a backup of the deleted rows. Go ahead and try it...</div>
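The copy-then-delete idea can be sketched end to end as follows. This is illustrative only: it uses SQLite so it runs standalone, whereas the post uses MariaDB, where the final delete is a multi-table DELETE ... JOIN (SQLite lacks that syntax, so an IN subquery stands in for the join); table names mirror the post's but are placeholders.

```python
# Illustrative sketch only (SQLite; the post uses MariaDB and a shell loop):
# copy each table's surplus duplicate rows into an audit table that records
# the source table name, then delete exactly those rows from the source.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test  (bid INTEGER PRIMARY KEY, refid TEXT NOT NULL);
CREATE TABLE test2 (bid INTEGER PRIMARY KEY, refid TEXT NOT NULL);
-- audit table: extra tablename column, no primary or unique keys
CREATE TABLE test5 (bid INTEGER, refid TEXT, tablename TEXT);
""")
for t in ("test", "test2"):
    conn.executemany(f"INSERT INTO {t} (refid) VALUES (?)",
                     [("suresh",), ("kumar",), ("suresh",), ("kumar",), ("kuna",)])

for t in ("test", "test2"):
    # Archive every row beyond the first per refid (the MIN(bid) row is kept).
    conn.execute(f"""
        INSERT INTO test5 (bid, refid, tablename)
        SELECT bid, refid, '{t}' FROM {t}
        WHERE bid NOT IN (SELECT MIN(bid) FROM {t} GROUP BY refid)""")
    # Delete from the source exactly what was archived for this table
    # (MySQL would express this as a multi-table DELETE ... INNER JOIN).
    conn.execute(f"""
        DELETE FROM {t} WHERE bid IN
        (SELECT bid FROM test5 WHERE tablename = '{t}')""")

archived = conn.execute("SELECT COUNT(*) FROM test5").fetchone()[0]
print(archived)  # 4 -- two surplus rows archived from each of the two tables
```

Because the delete keys off the audit table, the set of deleted rows is by construction identical to the set of backed-up rows.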
</div>
Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-13715328215624745362012-01-23T02:26:00.000-08:002015-10-05T16:34:30.297-07:00Replication stopped with duplicate entry upon starting up of MySQL from cold backup<div dir="ltr" style="text-align: left;" trbidi="on">
A few days ago, we faced a duplicate entry problem on one of our customers' slave mysqld servers after a cold backup. We run daily data sync checks for all of our customers and report any data discrepancies between master and slave servers for investigation, and there had been no issues for this particular server in the reports. When we checked the details of the entry, a row with the same data already existed in the table named in the Last_Error field of the slave status.
A quick check of the error log showed that the slave was started after the cold backup and immediately stopped with a duplicate entry error; the mysqld server version was 5.0.77-log Source distribution. From the log lines below, we can see that after the restart the slave SQL thread resumed at position 357198241, the same position it had stopped at during shutdown, and hit the duplicate entry error right there.<br />
<br />
<pre style="overflow: auto;">[pythian@dol5 ~]$ tail -20 /var/log/mysql/dol5-error.log
111221 7:42:33 [Note] Slave I/O thread: connected to master 'sslrepl@dol50.dollive.com:3306', replication started in log 'dol50-binlog.000763' at position 533765390
111222 7:05:55 [Note] /usr/libexec/mysqld: Normal shutdown
111222 7:05:55 [Note] Slave SQL thread exiting, replication stopped in log 'dol50-binlog.000767' at position 357198241
111222 7:05:55 [Note] Slave I/O thread killed while reading event
111222 7:05:55 [Note] Slave I/O thread exiting, read up to log 'dol50-binlog.000767', position 357511554
111222 7:05:55 InnoDB: Starting shutdown...
111222 7:06:01 InnoDB: Shutdown completed; log sequence number 372 128185461
111222 7:06:01 [Note] /usr/libexec/mysqld: Shutdown complete
111222 07:06:02 mysqld ended
111222 07:42:09 mysqld started
111222 7:42:18 InnoDB: Started; log sequence number 372 128185461
111222 7:42:20 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.77-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
111222 7:42:20 [Note] Slave SQL thread initialized, starting replication in log 'dol50-binlog.000767' at position 357198241, relay log '/var/lib/mysql/dol50-relay.000066' position: 357198386
111222 7:42:20 [Note] Slave I/O thread: connected to master 'sslrepl@dol50.dollive.com:3306', replication started in log 'dol50-binlog.000767' at position 357511554
111222 7:42:21 [ERROR] Slave: Error 'Duplicate entry '94335' for key 1' on query. Default database: 'live'. Query: 'insert into charges (amount, key, id) values (0.37, 'dlive', 23892)', Error_code: 1062
111222 7:42:21 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'dol50-binlog.000767' position 357198241
</pre>
<br />
The slave status output on the server where the error occurred is shown below.<br />
<br />
<pre style="overflow: auto;">mysql> show slave status\G
*************************** 1. row ***************************
......
...........
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '94335' for key 1' on query. Default database: 'live'. Query: 'insert into charges (amount, key, id) values (0.37, 'dlive', 23892)'
......
..........
1 row in set (0.00 sec)
</pre>
<br />
Another quick check of the transaction details in the master binary logs revealed that the transaction involved a mixture of the InnoDB and MyISAM storage engines. At this point we understood that the duplicate entry error was due to the use of different storage engines for tables within a single transaction, but how the rollback was performed was still a question, and it did not come from the application used by the customer. Below is a simple example showing that rows inserted into a MyISAM table cannot be rolled back even when we roll back the transaction.<br />
<br />
<pre style="overflow: auto;">* Creating a table with the MyISAM engine.
mysql> CREATE TABLE myisam_tbl ( item int, val int ) engine=MyISAM ;
Query OK, 0 rows affected (0.06 sec)

* Creating a table with the InnoDB engine.
mysql> CREATE TABLE innodb_tbl ( item int, val int ) engine=Innodb ;
Query OK, 0 rows affected (0.02 sec)

* Starting a transaction, inserting rows into both tables, and issuing a rollback.
mysql> START TRANSACTION ;
Query OK, 0 rows affected (0.06 sec)

mysql> INSERT INTO myisam_tbl (item, val) VALUES (1, 1), (2, 2);
Query OK, 2 rows affected (0.02 sec)
Records: 2 Duplicates: 0 Warnings: 0

mysql> INSERT INTO innodb_tbl (item, val) VALUES (1, 1), (2, 2);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0

mysql> ROLLBACK ;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show warnings ;
+---------+------+---------------------------------------------------------------+
| Level | Code | Message |
+---------+------+---------------------------------------------------------------+
| Warning | 1196 | Some non-transactional changed tables couldn't be rolled back |
+---------+------+---------------------------------------------------------------+
1 row in set (0.00 sec)

* This shows that the rows inserted into the MyISAM table still exist, i.e. were committed.
mysql> select * from myisam_tbl ;
+------+------+
| item | val |
+------+------+
| 1 | 1 |
| 2 | 2 |
+------+------+
2 rows in set (0.00 sec)

* This shows that the rows inserted into the InnoDB table were not committed.
mysql> select * from innodb_tbl ;
Empty set (0.00 sec)
</pre>
<br />
<br />
We checked the master binary logs at the position where the slave had stopped with the error and collected the SQL statements in the transaction that touched MyISAM and InnoDB tables. On the slave, we manually deleted, all at once, the rows that the transaction had inserted into the MyISAM tables (the rows causing the duplicate entry errors) and then started the slave; it worked fine and no further errors appeared. Note that deleting those MyISAM rows one at a time and restarting the slave does not work: the replication thread re-executes the whole transaction from its start position, re-inserts the previously deleted entry, stops at the next duplicate, and so on in round-robin fashion. The transaction never completes, because START SLAVE always replays it from the beginning, unless we skip the error, which skips the complete transaction.<br />
<br />
After a detailed investigation of the incident by another DBA, the cause was determined to be a transaction involving both MyISAM and InnoDB tables, which was rolled back during MySQL shutdown and applied again after start-up (this is normal MySQL behavior). More details on the MySQL server shutdown process can be found <a href="http://dev.mysql.com/doc/refman/5.0/en/server-shutdown.html">here</a>; see point 4 for the rollback performed by the SQL thread during shutdown if it is in the middle of a transaction. The STOP SLAVE transaction behavior was fixed in version 5.1.35, as per the change log <a href="http://dev.mysql.com/doc/refman/5.1/en/news-5-1-35.html">here</a>, to avoid the problems replication encountered with transactional engines, <b>but it was impossible to roll back a mixed-engines transaction when one of the engines was non-transactional, which meant that the slave could not safely re-apply any transaction that had been interrupted by STOP SLAVE</b>. While the changes to the InnoDB tables were rolled back and then applied normally, the changes to the MyISAM tables could not be rolled back and were applied twice, since MyISAM is a non-transactional engine in which every single SQL statement is committed right after execution.<br />
<br />
We made the recommendations below to the customer. Option 3 was chosen because the slave was dedicated to backups and it was a quick solution that fit their data set and their timeline for resolving the issue.<br />
<br />
1) Change queries or table types to ensure that transactional and non-transactional tables are never mixed in a single transaction.<br />
2) Upgrade to MySQL 5.1.35 or later, in which the STOP SLAVE behavior was changed to avoid this particular issue (transaction rollback during MySQL shutdown).<br />
3) Move to hot backups, which do not require MySQL to be shut down and always end at a completed transaction.</div>
Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-16833363183384024852011-08-31T08:01:00.000-07:002011-08-31T08:01:50.590-07:00Alter waiting on Select statement to completeA few days ago, we faced an interesting problem on one of our customer’s slave mysqld servers. An ALTER adding a new column took 542 seconds on the master server, whereas it took a few hours to complete on the slave because a SELECT blocked the ALTER from finishing.<br />
<br />
Here is the ALTER on the master server and its execution time from the binary logs.<br />
<pre style="overflow: auto;background-color: beige"><span>
# at 825737566
#110720 19:55:21 server id 31415 end_log_pos 825737730 Query thread_id=83250629 exec_time=542 error_code=0
use collect/*!*/;
SET TIMESTAMP=1311216921/*!*/;
/*!\C latin1 *//*!*/;
SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/;
ALTER TABLE `track` ADD `source_track_id` int(10) UNSIGNED NULL DEFAULT NULL AFTER `vendor_track_id`
/*!*/;
</span>
</pre>The ALTER completed without trouble on the master but got blocked by a SELECT on the slave. The running times of the threads in the processlist show that the ALTER started first and the SELECT afterwards. As per the binary logs on the master, the ALTER was a single statement executed by one thread_id, and the slave started lagging.<br />
<pre style="overflow: auto;background-color: beige"><span>
mysql> show processlist ;
+---------+-------------+----------------------------+-------------+-------------+--------+----------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+---------+-------------+----------------------------+-------------+-------------+--------+----------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
| 864 | system user | | NULL | Connect | 479352 | Waiting for master to send event | NULL |
| 865 | system user | | collect | Connect | 5383 | rename result table | ALTER TABLE `track` ADD `source_track_id` int(10) UNSIGNED NULL DEFAULT NULL AFTER `vendor_track_id |
| 3277392 | ios | intreport:56045 | collect | Query | 4850 | Copying to tmp table | SELECT /* Reporting Query 357 */
fdr.service_provider_code AS 'Provider',
fdr.service_provider_count |
| 3279284 | uios | solr02:47871 | collect | Query | 4503 | Waiting for table | select track_id,
vendor_track_id,
track_name,
if(promo = 'Y', 'true', 'false') as promo_track,
if( |
.........
.........
Seconds_Behind_Master: 5131
</span>
</pre>We ran the possible checks to find the status of the ALTER and why it was waiting at the "rename result table" step. The file-level checks show that the temporary table for the ALTER had been fully created a while ago.<br />
<pre style="overflow: auto;background-color: beige"><span>
pythian@db04:~$ ls -ltr /ssd2/mysql/collect/track.*
-rwxr-xr-x 1 mysql mysql 19153 2011-07-15 06:11 /ssd2/mysql/collect/track.frm
-rwxr-xr-x 1 mysql mysql 8980 2011-07-15 06:11 /ssd2/mysql/collect/track.TRG
-rwxr-xr-x 1 mysql mysql 2646605824 2011-07-20 20:40 /ssd2/mysql/collect/track.ibd
pythian@db04:~$ ls -ltr /mysql/data/collect/track.*
lrwxrwxrwx 1 mysql mysql 26 2011-07-15 08:08 /mysql/data/collect/track.TRG -> /ssd2/mysql/collect/track.TRG
lrwxrwxrwx 1 mysql mysql 26 2011-07-15 08:08 /mysql/data/collect/track.ibd -> /ssd2/mysql/collect/track.ibd
lrwxrwxrwx 1 mysql mysql 26 2011-07-15 08:08 /mysql/data/collect/track.frm -> /ssd2/mysql/collect/track.frm
pythian@db04:~$ ls -ltr /mysql/data/collect/*#*
-rwxr-xr-x 1 mysql mysql 19205 2011-07-20 20:04 /mysql/data/collect/#sql-7130_361.frm
-rwxr-xr-x 1 mysql mysql 2499805184 2011-07-20 20:23 /mysql/data/collect/#sql-7130_361.ibd
pythian@db04:~$ date
Wed Jul 20 22:13:27 PDT 2011
</span>
</pre>SHOW ENGINE INNODB STATUS reports the ALTER transaction as "not started" even though it had started, created the temporary table, and was waiting at the RENAME TABLE step. The SELECT query, ACTIVE for 9168 seconds, blocked the ALTER and did not allow it to complete.<br />
<pre style="overflow: auto;background-color: beige"><span>
---TRANSACTION 379ED1F1F, not started, process no 28976, OS thread id 1259583824
mysql tables in use 1, locked 2
MySQL thread id 865, query id 221087068 rename result table
ALTER TABLE `track` ADD `source_track_id` int(10) UNSIGNED NULL DEFAULT NULL AFTER `vendor_track_id`
---TRANSACTION 379ED1883, ACTIVE 9168 sec, process no 28976, OS thread id 1279453520 fetching rows, thread declared inside InnoDB 49
mysql tables in use 8, locked 0
MySQL thread id 3277392, query id 221086895 intreport 10.1.4.151 collect Copying to tmp table
SELECT /* Reporting Query 357 */
fdr.service_provider_code AS 'Provider',
</span>
</pre>A few guesses from colleagues:<br />
<br />
1) The SELECT started first, so it is blocking the ALTER. (Not true in this case.)<br />
2) The ALTER started first and, before it completed, the SELECT fired; fast index creation of secondary indexes allows concurrent SELECTs but still has to take a global lock during the RENAME, as per the explanation in the docs linked from <a href="http://dev.mysql.com/doc/innodb/1.1/en/innodb-create-index-concurrency.html">this blog post</a>. But the ALTER in this case is not a secondary index rebuild.<br />
<br />
A little more digging into the documentation revealed that ALTER TABLE makes a temporary copy of the original table. MySQL waits for other operations that are modifying the table, then proceeds. It incorporates the alteration into the copy, deletes the original table, and renames the new one. While ALTER TABLE is executing, the original table is readable by other sessions, but at the end ALTER still has to acquire a global lock on the data dictionary to swap the original and new table copies, which is why we could see other threads in the "Waiting for table" state. Updates and writes to the table that begin after the ALTER TABLE operation begins are stalled until the new table is ready, then are automatically redirected to the new table without any failed updates. So what happened in this situation was: the ALTER started first, then a concurrent SELECT started on the same table; when the ALTER finished copying to a temp table it tried to RENAME the table, but failed to get the global lock on the data dictionary. All the threads that started after this point had to wait on the ALTER to finish.<br />
<br />
More details on the ALTER TABLE operation can be found <a href="http://dev.mysql.com/doc/refman/5.1/en/alter-table.html">here</a>.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-11475224576550916672011-06-20T23:15:00.000-07:002011-06-20T23:16:11.326-07:00Investigating Sort aborted issuesA few days ago, a few fellow DBAs and I discussed whether the "Sort aborted" error in the error log should be monitored, or whether it can be ignored because alerting the on-call DBA, who then just checks the machine and mysqld server status, brings little value when the error appears.<br />
<br />
We identified the following possible or known causes of this error:<br />
<br />
1) It could be generated when<br />
a) a user cancels or kills a query in the middle of a filesort<br />
b) a web server reboots while some queries are sorting, i.e. when the connection to the database drops suddenly<br />
2) A lack of temp disk space in tmpdir prevented the temporary file a query needed from being created.<br />
<br />
As DBAs, we can't do much about the first cause apart from asking the customer to check the application side for connection drop-outs.<br />
The fix for the second cause is to change tmpdir in MySQL to point to a directory (a new one, created just for this purpose) on the biggest file system.<br />
<br />
3) A transaction got rolled back or aborted due to a lock wait timeout or deadlock.<br />
<br />
In a database, a deadlock is a situation in which two or more transactions are waiting for one another to give up locks.<br />
Even if a transaction is not involved in a deadlock, it might have to wait a considerable amount of time to obtain a lock because of a long-running transaction or transactions holding locks on the tables it needs. In such a situation, you might not want a transaction to wait indefinitely. Instead, you might want the waiting transaction to abort, or time out, after a reasonable amount of time, called a lock wait timeout.<br />
<br />
In such cases, increase the innodb_lock_wait_timeout value on the database server if you want the transaction to wait longer and complete. Deadlock details appear in the "LATEST DETECTED DEADLOCK" section of the SHOW ENGINE INNODB STATUS\G output; the full SQL statements executed by both transactions then need to be investigated with the application dev team to identify the root cause and its resolution.<br />
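A lock wait timeout can be sketched with ordinary Python locks (a toy illustration, not InnoDB's lock manager; the durations are made up): one worker holds a lock for a long time, and a second gives up after a bounded wait instead of blocking forever, much like a transaction aborting when innodb_lock_wait_timeout expires.

```python
# Toy illustration of a lock wait timeout using plain Python locks.
import threading
import time

lock_a = threading.Lock()

def long_transaction():
    with lock_a:
        time.sleep(0.5)   # holds the lock "too long"

def waiting_transaction(timeout):
    # Give up after `timeout` seconds instead of waiting indefinitely,
    # analogous to a small innodb_lock_wait_timeout.
    acquired = lock_a.acquire(timeout=timeout)
    if acquired:
        lock_a.release()
    return acquired

t = threading.Thread(target=long_transaction)
t.start()
time.sleep(0.05)                   # ensure the first worker holds the lock
result = waiting_transaction(0.1)  # times out while the lock is still held
t.join()
print(result)   # False -> the waiter gave up, like a lock wait timeout
```

Raising the timeout is the equivalent of passing a larger value to the waiter; it only helps if the blocking transaction eventually finishes.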
<br />
4) Unexpected errors, such as the source table or even the tmp table being corrupt, or the processing of a subquery (which was also sorting) failing.<br />
<br />
In this case, repair the corrupted source table using REPAIR TABLE table_name or mysqlcheck.<br />
<br />
5) Insufficient memory allocated for sort_buffer_size: "Out of sort memory; increase server sort buffer size".<br />
<br />
Check the sort buffer size allocated on the server and increase it a little at a time, because it is a per-connection variable and raising it too far can cause the server itself to run out of memory.<br />
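To see why small steps matter, here is a rough worst-case estimate (plain arithmetic, not an exact model of mysqld's memory accounting; the max_connections value of 500 is an example):

```python
# Rough worst-case memory estimate for a per-connection buffer.
# Not an exact model of mysqld's allocator -- just arithmetic showing
# why sort_buffer_size must be raised in small steps.
def worst_case_sort_memory(sort_buffer_size, max_connections):
    """Upper bound if every connection runs a filesort at once."""
    return sort_buffer_size * max_connections

current  = worst_case_sort_memory(262144, 500)            # 256K buffer, 500 conns
proposed = worst_case_sort_memory(4 * 1024 * 1024, 500)   # raised to 4M

print(current // (1024 * 1024))    # 125  (MB)
print(proposed // (1024 * 1024))   # 2000 (MB)
```

A 16x bump of the variable turns a modest 125MB worst case into roughly 2GB, which can easily push the server into swap or OOM.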
<br />
<pre style="overflow: auto;background-color: beige">
mysql> show global variables like '%sort%';
+---------------------------+--------------+
| Variable_name | Value |
+---------------------------+--------------+
| max_length_for_sort_data | 1024 |
| max_sort_length | 1024 |
| myisam_max_sort_file_size | 107374182400 |
| myisam_sort_buffer_size | 36700160 |
| sort_buffer_size | 262144 |
+---------------------------+--------------+
5 rows in set (0.14 sec)
</pre>
<br />
Apart from the above causes, a few more things need to be checked when we see this type of message in the error log:<br />
<br />
1) How big is your biggest single table (data + index) ?<br />
2) Do you have any JOINs that involve a huge amount of data being scanned or returned ?<br />
3) What file-system do you have on your machine ? <br />
4) Can you identify the queries that were running at that time? How long do they run?<br />
<br />
Let's go through the above things in detail.<br />
<br />
To find the biggest table in the database, execute the SQL query below:<br />
<br />
<pre style="overflow: auto;background-color: beige">
SELECT SQL_CALC_FOUND_ROWS TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE, ENGINE, TABLE_ROWS,
TRUNCATE(DATA_LENGTH/pow(1024,2),2) as 'DATA (M)',
TRUNCATE(INDEX_LENGTH/pow(1024,2),2) as 'INDEX (M)',
TRUNCATE(((DATA_LENGTH+INDEX_LENGTH))/pow(1024,2),2) AS 'TOTAL(M)'
FROM information_schema.tables
WHERE TABLE_SCHEMA not in ( 'information_schema', 'mysql')
AND TABLE_TYPE = 'BASE TABLE'
ORDER BY 8 DESC
LIMIT 1;
</pre>
<br />
The easiest way to find queries whose JOINs scan or return a huge amount of data is to enable the slow query log on the database server and run the command below at your shell prompt (substituting the actual slow query log file); it lists the Rows_examined values from the slow query log in descending order.<br />
<br />
<pre style="overflow: auto;background-color: beige">
$ less slowquery.log | grep -i "# Query_time:" | awk '{print $9}' | sort -rn | more
123949125
123947793
123858573
123855366
76087924
75933311
75776390
75691301
75671196
75556710
</pre>
<br />
After finding the top few scan counts, run the command below, replacing the number in the grep condition, to see the query details.<br />
<br />
<pre style="overflow: auto;background-color: beige">
$ less slowquery.log | grep 123949125 -B 4 -A 2 | more
</pre>
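For those who prefer a script over the grep/awk pipeline, here is a minimal sketch in Python that pairs each Rows_examined header with the statement that follows it. It assumes one-line statements and does not handle the USE/SET lines or multi-line queries of a real slow log, so treat it as a starting point; the sample lines are invented.

```python
# Sketch: report the top scanners from slow-query-log lines by pairing
# each "# Query_time:" header's Rows_examined with the next statement.
# Simplification: assumes one-line statements (real logs also contain
# USE/SET lines and multi-line queries).
import re

def top_scanners(lines, limit=3):
    entries, current_rows = [], None
    for line in lines:
        m = re.search(r"Rows_examined:\s+(\d+)", line)
        if m:
            current_rows = int(m.group(1))
        elif current_rows is not None and not line.startswith("#"):
            entries.append((current_rows, line.strip()))
            current_rows = None
    return sorted(entries, reverse=True)[:limit]

sample = [
    "# Query_time: 31.2  Lock_time: 0.0 Rows_sent: 10 Rows_examined: 123949125",
    "SELECT * FROM big_table WHERE col LIKE '%x%';",
    "# Query_time: 0.1  Lock_time: 0.0 Rows_sent: 1 Rows_examined: 42",
    "SELECT 1;",
]
for rows, query in top_scanners(sample):
    print(rows, query)
```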
<br />
Once we have the list of queries doing large scans, we can tune them using the EXPLAIN plan, creating appropriate indexes for the columns involved in the joins and the sort order, etc.<br />
<br />
A comparison of different filesystems and their limitations can be found <a href="http://en.wikipedia.org/wiki/Comparison_of_file_systems">here</a>.<br />
The filesystems and their types on a Linux machine can be found by running the commands below at the shell prompt.<br />
<br />
<pre style="overflow: auto;background-color: beige">
[suresh@home]$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/mapper/sysvg-root
ext3 470000608 24019012 421721808 6% /
/dev/mapper/mysqlvg-mysql
ext3 945112464 289180564 607148540 33% /mysql
/dev/sda1 ext3 505604 17557 461943 4% /boot
tmpfs tmpfs 24724440 0 24724440 0% /dev/shm

[suresh@home]$ mount
/dev/mapper/sysvg-root on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/mysqlvg-mysql on /mysql type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
</pre>
<br />
If we have reached the filesystem's size limits for a given table, then we have to move to a better filesystem.<br />
<br />
To find which queries were running at that time, and for how long, log the processlist periodically into a file using the command below.<br />
<br />
<pre style="overflow: auto;background-color: beige">
mysqladmin --user=xxxx --password=xxxxx --sleep=300 processlist
</pre>
<br />
The above are ways of finding the possible causes of the "Sort aborted" error and their resolution methods.<br />
<br />
Before version 5.5.10, for queries aborted due to a sort problem, the server wrote only the message "Sort aborted" to the error log.<br />
In later versions the server writes a more specific, and more useful, message, such as:<br />
<br />
<pre style="overflow: auto;background-color: beige">
[ERROR] mysqld: Sort aborted: Out of memory (Needed 24 bytes)

[ERROR] mysqld: Sort aborted: Out of sort memory,
consider increasing server sort buffer size

[ERROR] mysqld: Sort aborted: Incorrect number of arguments for
FUNCTION test.f1; expected 0, got 1
</pre>
<br />
In addition, if the server was started with --log-warnings=2, the server writes information about the host, user, and query.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-83145977516725821702011-05-25T03:30:00.000-07:002011-06-20T23:13:16.991-07:00InnoDB error : Total number of locks exceeds the lock table sizeRecently, one of the clients I work with had a problem on one of their replication slaves: an SQL statement that had executed successfully on the master server gave an error while executing on the slave.<br />
<br />
The slave stopped with the error below.<br />
<pre style="overflow: auto;background-color: beige"><span>
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: reporting.com
Master_User: replication
Master_Port: 3306
Connect_Retry: 10
Master_Log_File: mysql3308-bin-log.000488
Read_Master_Log_Pos: 340658086
Relay_Log_File: mysql3319-relay-log.000156
Relay_Log_Pos: 4983506
Relay_Master_Log_File: mysql3308-bin-log.000462
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1206
Last_Error: Error 'The total number of locks exceeds the lock table size' on query. Default database: 'production'. Query: 'create table Summary SELECT Dimension.calendarYearMonth, count(*) as EmailsSent, count(distinct Email_Sent.UserID) as "Unique Recipients"
FROM Dimension, Email_Sent
WHERE
Email_Sent.DateID = Dimension.id
AND Dimension.date >= '2010-11-01'
GROUP BY 1'
Skip_Counter: 0
Exec_Master_Log_Pos: 4983353
Relay_Log_Space: 14212286599
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1206
Last_SQL_Error: Error 'The total number of locks exceeds the lock table size' on query. Default database: 'production'. Query: 'create table Summary SELECT Dimension.calendarYearMonth, count(*) as EmailsSent, count(distinct Email_Sent.UserID) as "Unique Recipients"
FROM Dimension, Email_Sent
WHERE
Email_Sent.DateID = Dimension.id
AND Dimension.date >= '2010-11-01'
GROUP BY 1'
1 row in set (0.00 sec)
</span>
</pre><br />
I started the slave and monitored the query's progress using SHOW ENGINE INNODB STATUS\G. According to the query's EXPLAIN plan, it was estimated to scan, and lock, about 910 million rows. The InnoDB status below shows that it had already locked 677 million rows before the slave stopped again with the same error.<br />
<br />
This is an error specific to InnoDB tables. The query mentioned above attempts to set locks on all records scanned, to ensure the data returned doesn’t change before creation of the new table. These locks are put into an area of memory allocated within the buffer pool, sized by the innodb_buffer_pool_size variable.<br />
<br />
<br />
<span><br />
<pre style="overflow: auto;background-color: beige"><span>
---TRANSACTION 5722EE5D, ACTIVE 18100 sec, process no 13052, OS thread id 1222195536 fetching rows
mysql tables in use 2, locked 2
2653776 lock struct(s), heap size 327759856, 677089582 row lock(s)
MySQL thread id 2860, query id 1863366 Copying to tmp table on disk</span>
create table Summary SELECT Dimension.calendarYearMonth, count(*) as EmailsSent, count(distinct Email_Sent.UserID) as "Unique Recipients"
FROM Dimension, Email_Sent
WHERE
Email_Sent.DateID = Dimension.id
AND Dimension.date >= '2010-11-01'
GROUP BY 1
TABLE LOCK table `production`.`Email_Sent` trx id 5722EE5D lock mode IS
RECORD LOCKS space id 5623 page no 11 n bits 440 index `ClientID` of table `production`.`Email_Sent` trx id 5722EE5D lock mode S
TABLE LOCK table `production`.`Date_Dimension` trx id 5722EE5D lock mode IS
RECORD LOCKS space id 5611 page no 34 n bits 240 index `PRIMARY` of table `production`.`Date_Dimension` trx id 5722EE5D lock mode S locks rec but not gap
RECORD LOCKS space id 5623 page no 12 n bits 440 index `ClientID` of table `production`.`Email_Sent` trx id 5722EE5D lock mode S
RECORD LOCKS space id 5623 page no 15 n bits 272 index `ClientID` of table `production`.`Email_Sent` trx id 5722EE5D lock mode S
RECORD LOCKS space id 5623 page no 18 n bits 440 index `ClientID` of table `production`.`Email_Sent` trx id 5722EE5D lock mode S
RECORD LOCKS space id 5623 page no 21 n bits 224 index `ClientID` of table `production`.`Email_Sent` trx id 5722EE5D lock mode S
RECORD LOCKS space id 5623 page no 29 n bits 440 index `ClientID` of table `production`.`Email_Sent` trx id 5722EE5D lock mode S
RECORD LOCKS space id 5623 page no 31 n bits 440 index `ClientID` of table `production`.`Email_Sent` trx id 5722EE5D lock mode S
TOO LOCKS PRINTED FOR THIS TRX: SUPPRESSING FURTHER PRINTS
</span>
</pre><br />
We found that the main difference in MySQL configuration between the master and the slave was the amount of memory allocated to innodb_buffer_pool_size, because the slave was running several MySQL instances used for backup purposes.<br />
<br />
Increasing innodb_buffer_pool_size in the configuration file so that the lock table can fit, and then restarting the database, should fix the problem.<br />
<br />
InnoDB uses the buffer pool to cache the data and indexes of its tables, but it also uses it to store system information such as the lock table. The query sets locks on all the records it scans, so with a small buffer pool the InnoDB lock table may indeed grow too big to fit. The lock table size for the 677 million rows was 327M according to the InnoDB status. So we increased innodb_buffer_pool_size from 500MB to 1GB, which was enough to hold all the locks, and restarted the database server. The SQL query then completed successfully on the slave.<br />
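Plugging the numbers from the status output above into a quick calculation shows how compact row locks are, and why the 500MB buffer pool still overflowed:

```python
# Arithmetic from the SHOW ENGINE INNODB STATUS output above: row locks
# are stored compactly (as bitmaps inside lock structs), so the per-row
# cost is tiny, yet a full-table scan can still outgrow a small buffer pool.
lock_heap_bytes = 327759856    # "heap size" from the status output
row_locks       = 677089582    # "row lock(s)" from the status output

print(round(lock_heap_bytes / row_locks, 2))   # ~0.48 bytes per row lock
print(lock_heap_bytes // (1024 * 1024))        # ~312 MB of buffer pool used
```

At under half a byte per row lock the structure is efficient, but 677 million locks still consumed roughly 312MB, most of the original 500MB pool.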
<br />
<br />
More details on InnoDB row-level locks and the memory they take can be found <a href="http://www.mysqlperformanceblog.com/2006/07/13/how-much-memory-innodb-locks-really-take/">here</a>.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-74562890651784485022010-12-26T05:10:00.000-08:002010-12-26T05:12:47.970-08:00Aborted connection 31084472 to db: ‘ms’ user: ‘login’ host: `x.x.x.12′ (Got timeout reading communication packets)ERROR Message :<br />081016 23:25:44 Aborted connection 31084472 to db: ‘ms’ user: ‘login’ host: `x.x.x.12′ (Got timeout reading communication packets)<br />081016 23:25:44 Aborted connection 31084391 to db: ‘ms’ user: ‘login’ host: `x.x.x.13′ (Got timeout reading communication packets)<br />081016 23:25:44 Aborted connection 31084488 to db: ‘ms’ user: ‘login’ host: `x.x.x.15′ (Got timeout reading communication packets)<br />081016 23:25:44 Aborted connection 31084509 to db: ‘ms’ user: ‘login’ host: `x.x.x.15′ (Got timeout reading communication packets)<br />081016 23:25:45 Aborted connection 31084518 to db: ‘ms’ user: ‘login’ host: `x.x.x..11′ (Got an error reading communication packets)<br />When these Aborted connection messages appear in the error log, the cause can be any of the following:<br />1) The client program did not call mysql_close() before exiting.<br />2) The client had been sleeping more than wait_timeout or interactive_timeout seconds without issuing any requests to the server.<br />3) The client program ended abruptly in the middle of a data transfer.<br />When any of these things happen, the server increments the Aborted_clients status variable.<br />The server increments the Aborted_connects status variable when the following things happen:<br />1) A client doesn’t have privileges to connect to a database.<br />2) A client uses an incorrect password.<br />3) A connection packet doesn’t contain the 
right information.<br />4) It takes more than connect_timeout seconds to get a connect packet.<br /><br />-Suresh KunaSuresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-60821161282716142010-12-26T05:07:00.000-08:002010-12-26T05:10:20.780-08:00Fatal error: Can’t change to run as user ‘mysql’ ; Please check that the user exists!Problem Statement:<br /><br />I got the error while starting the server, and is it not able to start as mysql user.<br />It throws the below error<br /><br />080915 19:58:10 mysqld started<br />Fatal error: Can’t change to run as user ‘mysql’ ; Please check that the user exists!<br />080915 19:58:10 Aborting<br /><br />080915 19:58:10 /usr/local/mysql4.1.3/bin/mysqld: Shutdown complete<br /><br />080915 19:58:10 mysqld ended<br /><br />080915 20:03:59 mysqld started<br />Fatal error: Can’t change to run as user ‘mysql’ ; Please check that the user exists!<br />080915 20:03:59 Aborting<br /><br />080915 20:03:59 /usr/local/mysql4.1.3/bin/mysqld: Shutdown complete<br /><br />080915 20:03:59 mysqld ended<br /><br />080915 20:09:07 mysqld started<br />Fatal error: Can’t change to run as user ‘mysql’ ; Please check that the user exists!<br />080915 20:09:07 Aborting<br /><br />080915 20:09:07 /usr/local/mysql4.1.3/bin/mysqld: Shutdown complete<br /><br />080915 20:09:07 mysqld ended<br /><br />080915 20:12:21 mysqld started<br />080915 20:12:21 InnoDB: Started; log sequence number 0 43644<br />/usr/local/mysql/bin/mysqld: ready for connections.<br />Version: ‘4.1.3-beta-standard’ socket: ‘/tmp/mysql3311.sock’ port: 3311<br />080915 20:12:21 Slave SQL thread initialized, starting replication in log ‘hawk-bin.000027′ at position 301237442, relay log<br />‘./menus-relay-bin.000006′ position: 4<br />080915 20:12:21 Slave I/O thread: connected to master ‘replicate@x.x.x.90:3306′, replication started in log ‘hawk-bin.<br />000027′ at position 301237442<br />080915 20:12:21 
Error reading packet from server: Client requested master to start replication from impossible position (ser<br />ver_errno=1236)<br />080915 20:12:21 Got fatal error 1236: ‘Client requested master to start replication from impossible position’ from master wh<br />en reading data from binary log<br />080915 20:12:21 Slave I/O thread exiting, read up to log ‘hawk-bin.000027′, position 301237442<br />080915 20:13:07 Aborted connection 4 to db: ‘unconnected’ user: ‘root’ host: `localhost’ (Got an error writing communication<br />packets)<br />080915 20:13:07 /usr/local/mysql/bin/mysqld: Normal shutdown<br /><br />080915 20:13:07 Error reading relay log event: slave SQL thread was killed<br />080915 20:13:07 InnoDB: Starting shutdown…<br />080915 20:13:09 InnoDB: Shutdown completed; log sequence number 0 43644<br />080915 20:13:09 /usr/local/mysql/bin/mysqld: Shutdown complete<br /><br />080915 20:13:09 mysqld ended<br /><br />080915 20:23:30 mysqld started<br />080915 20:23:31 InnoDB: Started; log sequence number 0 43644<br />/usr/local/mysql/bin/mysqld: ready for connections.<br />Version: ‘4.1.3-beta-standard’ socket: ‘/tmp/mysql3311.sock’ port: 3311<br />080915 20:23:31 Slave SQL thread initialized, starting replication in log ‘hawk-bin.000027′ at position 301237442, relay log<br />‘./menus-relay-bin.000008′ position: 4<br />080915 20:23:31 Slave I/O thread: connected to master ‘replicate@x.x.x.90:3306′, replication started in log ‘hawk-bin.<br />000027′ at position 301237442<br />080915 20:23:31 Error reading packet from server: Client requested master to start replication from impossible position (ser<br />ver_errno=1236)<br />080915 20:23:31 Got fatal error 1236: ‘Client requested master to start replication from impossible position’ from master wh<br />en reading data from binary log<br />080915 20:23:31 Slave I/O thread exiting, read up to log ‘hawk-bin.000027′, position 301237442<br />080915 20:34:49 Aborted connection 3 to db: ‘unconnected’ user: ‘root’ 
host: `localhost’ (Got an error reading communication<br />packets)<br />080915 20:35:56 Error reading relay log event: slave SQL thread was killed<br />080915 20:35:59 Slave SQL thread initialized, starting replication in log ‘hawk-bin.000027′ at position 301237442, relay log<br />‘./menus-relay-bin.000009′ position: 4<br /><br />Solution :<br />The issue was solved by running the server via “sudo -u mysql” before the mysql start-up script.<br /><br />sudo -u mysql $BASEDIR/bin/mysqld --defaults-file=$DEFAULT --basedir=$BASEDIR --datadir=$DATADIR \<br />--user=mysql --pid-file=$PIDFILE 2>&1Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-71506557635093861042010-12-26T05:04:00.000-08:002010-12-26T05:06:28.332-08:00Could not parse relay log event entry.If a slave shows the error below, then:<br /><br />1) If the slave's relay log is corrupted, run CHANGE MASTER TO to get the slave in sync with the master.<br /><br />2) If the master's binlog is corrupted, we have to import a fresh dump from the master to get the slave in sync.<br /><br />Show slave status :<br /><br />mysql> show slave status\G<br />*************************** 1. row ***************************<br />Master_Host: 10.x.x.39<br />Master_User: replicate<br />Master_Port: 3307<br />Connect_retry: 60<br />Master_Log_File: db-master.077<br />Read_Master_Log_Pos: 7400184<br />Relay_Log_File: db-slave-relay-bin.010<br />Relay_Log_Pos: 150588765<br />Relay_Master_Log_File: db-master.076<br />Slave_IO_Running: Yes<br />Slave_SQL_Running: No<br />Replicate_do_db: broadbandnew,archive<br />Replicate_ignore_db:<br />Last_errno: 0<br />Last_error: Could not parse relay log event entry. 
The possible reasons are: the master’s binary log is corrupted (you can check this by running ‘mysqlbinlog’ on the binary log), the slave’s relay log is corrupted (you can check this by running ‘mysqlbinlog’ on the relay log), a network problem, or a bug in the master’s or slave’s MySQL code. If you want to check the master’s binary log or slave’s relay log, you will be able to know their names by issuing ‘SHOW SLAVE STATUS’ on this slave.<br />Skip_counter: 0<br />Exec_master_log_pos: 150588671<br />Relay_log_space: 157989718<br />1 row in set (0.00 sec)Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-89041584971208179722010-12-26T04:55:00.000-08:002010-12-26T04:56:43.228-08:00Duplicate entry for key 1′ on queryIf the slave show the following error then skip the same as below.<br /><br />mysql> show slave status\G<br />*************************** 1. row ***************************<br />Master_Host: x.x.4.3<br />Master_User: replicate<br />Master_Port: 3307<br />Connect_retry: 60<br />Master_Log_File: master.077<br />Read_Master_Log_Pos: 7394212<br />Relay_Log_File: slave- relay-bin.010<br />Relay_Log_Pos: 150588010<br />Relay_Master_Log_File: master.076<br />Slave_IO_Running: Yes<br />Slave_SQL_Running: No<br />Replicate_do_db: bandnew,archive<br />Replicate_ignore_db:<br />Last_errno: 1062<br />Last_error: Error ‘Duplicate entry ‘122453106710.31.87.80-mdazaz123-2008-10-21 03:38:23′ for key 1′ on query. Default database: ‘bandnew’. 
Query:<br /><br />‘insert into br_info(UI_ID,UI_USER,CD_CAF,BI_PRODUCT,BI_STIME,BI_ETIME,BI_TIMEUTILIZED,BI_OPENINGBAL,BI_CLBAL,<br /><br />BI_BILLEDAMT,BI_USERIP,BI_SERVICENAME,BI_BYTESIN,BI_BYTESOUT,BI_BYTES,BI_REMARKS,BI_PUBLICIP,BI_SAMIP) values<br /><br />(’122453106710.31.87.80′,’mdz123′,’Default’,'D4NL’,'2008-10-21 03:38:23′,<br /><br />‘2008-10-21 06:10:24′,152,197.09,195.08,’2.01′,’10.31.87.80′,’F',295137740,125414454,420552194,’D',’118.9.4.47′,’210.1.80.3′)’<br />Skip_counter: 0<br />Exec_master_log_pos: 150587868<br />Relay_log_space: 157983746<br />1 row in set (0.00 sec)<br /><br />mysql> set global sql_slave_skip_counter=1;start slave;show slave status\G<br />Query OK, 0 rows affected (0.00 sec)<br /><br />Query OK, 0 rows affected (0.00 sec)<br /><br />*************************** 1. row ***************************<br />Master_Host: x.x.4.3<br />Master_User: replicate<br />Master_Port: 3307<br />Connect_retry: 60<br />Master_Log_File: master.077<br />Read_Master_Log_Pos: 7394212<br />Relay_Log_File: slave- relay-bin.010<br />Relay_Log_Pos: 150588010<br />Relay_Master_Log_File: master.076<br />Slave_IO_Running: Yes<br />Slave_SQL_Running: Yes<br />Replicate_do_db: bandnew,archive<br />Replicate_ignore_db:<br />Last_errno: 0<br />Last_error:<br />Skip_counter: 1<br />Exec_master_log_pos: 150587868<br />Relay_log_space: 157983746<br />1 row in set (0.00 sec)Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-33173924989617917372010-12-26T04:53:00.000-08:002010-12-26T04:54:43.890-08:00Error: log file /mysql/data/mysqld/ib_logfile0 is of different sizeProblemStatement:<br /><br /> I have got the below error while starting the mysql after configuring a new slave with the tall ball of the master data directory.<br /><br />081022 6:24:41 InnoDB: Warning: shutting down a not properly started<br />InnoDB: or created database!<br />081022 6:24:41 /usr/local/mysql4020/libexec/mysqld: 
Shutdown Complete<br /><br />InnoDB: Error: log file /mysql/data/mysqld/ib_logfile0 is of different size 0 209715200 bytes<br />InnoDB: than specified in the .cnf file 0 104857600 bytes!<br />081022 6:24:42 Can't init databases<br />081022 6:24:42 Aborting<br /><br />Solution :<br /><br />This was resolved by setting the variable below in the slave's cnf file to the same value as in the master config file:<br /><br />innodb_log_file_size = 200MSuresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-47205954264904346292010-12-26T04:50:00.000-08:002010-12-26T04:52:37.155-08:00Error: 'Access denied for user 'replicate'@'10.0.0.8' (using password: YES)'Problem Statement:<br /><br />We got the below error message while setting up replication.<br /><br />Error: 'Access denied for user 'replicate'@'10.0.0.8' (using password: YES)'.<br /><br />Solution :<br /><br /><18:09:58>root@6.83: ~#mysql -u root -p -h10.0.0.55 -P3306<br /><br />If the error is due to a wrong password, correct the replication password as below.<br />Error: 'Access denied for user 'replicate'@'10.0.0.8' (using password: YES)'<br /><br /><18:09:58>root@6.83: ~#vi /etc/my.cnf (to change the password, edit the cnf file)<br /><18:09:58>root@6.83: ~#less /data/mysqldata/err.log<br /><br />mysql> change master to master_password='xxxxx';<br /><br />mysql> change master to master_password='replicate',master_user='replicate',master_host='10.0.0.55',master_port=3306,master_log_file='narnia-bin.000030',master_log_pos=271266680;<br /><br />mysql> stop slave;<br />mysql> start slave;<br /><br />mysql> show slave status\G<br />*************************** 1. 
row ***************************<br />Slave_IO_State: Connecting to master<br />Master_Host: 10.0.0.55<br />Master_User: replicate<br />Master_Port: 3306<br />Connect_Retry: 60<br />Master_Log_File: arnia-bin.000030<br />Read_Master_Log_Pos: 271260976<br />Relay_Log_File: relay-bin.000004<br />Relay_Log_Pos: 4<br />Relay_Master_Log_File: arnia-bin.000030<br />Slave_IO_Running: Yes<br />Slave_SQL_Running: Yes<br />Replicate_Do_DB:<br />Replicate_Ignore_DB:<br />Replicate_Do_Table: mail.appearance_new,pop.hostcnt<br />Replicate_Ignore_Table:<br />Replicate_Wild_Do_Table:<br />Replicate_Wild_Ignore_Table:<br />Last_Errno: 0<br />Last_Error:<br />Skip_Counter: 0<br />Exec_Master_Log_Pos: 271266680<br />Relay_Log_Space: 4<br />Until_Condition: None<br />Until_Log_File:<br />Until_Log_Pos: 0<br />Master_SSL_Allowed: No<br />Master_SSL_CA_File:<br />Master_SSL_CA_Path:<br />Master_SSL_Cert:<br />Master_SSL_Cipher:<br />Master_SSL_Key:<br />Seconds_Behind_Master: NULL<br />1 row in set (0.00 sec)Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-12457479248825073382010-12-26T04:49:00.000-08:002010-12-26T04:50:10.794-08:00Got fatal error 1236: 'Client requested master to start replication from impossible position' from master when reading data from binary logProblem statement:<br /><br />The below is a problem I faced in a well-running replication system.<br /><br />Error Message :<br />081016 18:34:43 Error reading packet from server: Client requested master to start replication from impossible position (server_errno=1236)<br />081016 18:34:43 Got fatal error 1236: 'Client requested master to start replication from impossible position' from master when reading data from binary log<br />081016 18:34:43 Slave I/O thread exiting, read up to log 'lot-bin.000065', position 13664242<br />081016 18:35:04 Error reading relay log event: slave SQL thread was killed<br />081016 18:45:40 Slave SQL 
thread initialized, starting replication in log 'lot-bin.000065' at position 13664242, relay log './mx5-relay-bin.000001' position: 13664284<br />081016 18:45:40 Slave I/O thread: connected to master 'replicate@10.0.4:3306', replication started in log 'lot-bin.000065' at position 13664242<br /><br />Solution :<br />1) If Read_Master_Log_Pos has a normal value but the slave asked for an impossible position, this could be some internal bug. I see only a single way out, and it could be difficult: if you have --log-slave-updates, use the slave binary logs to find the last executed statement and its position in the master logs, and CHANGE MASTER to that position.<br />2) On the master, execute<br />SHOW MASTER STATUS<br />Look at the output and find the log that the slave is pointing to. Note the file size field.<br />Next look at the output from the slave reporting the issue. Look at Exec_Master_Log_Pos: if that value is greater than the file size on the master, issue<br />CHANGE MASTER TO MASTER_LOG_FILE=[NEXT FILE], MASTER_LOG_POS=4;<br />start slave;Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-30377573156020793482010-12-26T04:47:00.000-08:002010-12-26T04:48:50.747-08:00ERROR 1251: Client does not support authentication protocolProblem Statement:<br /><br />I got the below error while trying to access the upgraded mysqld server with an old client.<br /><br />Error :<br /><br />ERROR 1251: Client does not support authentication protocol<br />requested by server; consider upgrading MySQL client<br /><br />Solution :<br /><br />I resolved the above by updating the password as below.<br /><br />update user set password=old_password('new_password') where user='user_name' and host='10.0.0.2';<br /><br />flush privileges;<br /><br />and then try to access the mysqld server; now you will be able to connect to the server successfully.Suresh 
Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-51685497235965206082010-12-26T04:43:00.000-08:002010-12-26T04:46:08.278-08:00Can't get CREATE TABLE (Can't open file: 'Tory_Map.MYI'. (errno: 145))Error Message :<br />/usr/local/mysql/bin/mysqldump: Can't get CREATE TABLE for table `Tory_Map` (Can't open file: 'Tory_Map.MYI'. (errno: 145))<br /><br />Solution :<br />1) Do a show table status as below:<br />mysql> show table status like 'Tory_Map'\G<br />*************************** 1. row ***************************<br />Name: Tory_Map<br />Type: NULL<br />Row_format: NULL<br />Rows: NULL<br />Avg_row_length: NULL<br />Data_length: NULL<br />Max_data_length: NULL<br />Index_length: NULL<br />Data_free: NULL<br />Auto_increment: NULL<br />Create_time: NULL<br />Update_time: NULL<br />Check_time: NULL<br />Create_options: NULL<br />Comment: Can't open file: 'Tory_Map.MYI'. (errno: 145)<br />1 row in set (0.00 sec)<br /><br />2) and then do a check table as below.<br />mysql> check table Tory_Map;<br />+------------------+-------+----------+-------------------------------+<br />| Table | Op | Msg_type | Msg_text |<br />+------------------+-------+----------+-------------------------------+<br />| doc.Tory_Map | check | warning | Table is marked as crashed |<br />| doc.Tory_Map | check | error | Found 3670180 keys of 3670179 |<br />| doc.Tory_Map | check | error | Corrupt |<br />+------------------+-------+----------+-------------------------------+<br />3 rows in set (20 min 1.65 sec)<br /><br />3) If the table is marked as crashed then do a repair of that table as below.<br />mysql> repair table Tory_Map;<br /><br />+------------------+--------+----------+----------+<br />| Table | Op | Msg_type | Msg_text |<br />+------------------+--------+----------+----------+<br />| doc.Tory_Map | repair | status | OK |<br />+------------------+--------+----------+----------+<br />1 row in set (1 min 42.77 sec)<br /><br />and if you do a check table now, the result should be OK as below.<br />mysql> check table Tory_Map;<br />+------------------+-------+----------+----------+<br />| Table | Op | Msg_type | Msg_text |<br />+------------------+-------+----------+----------+<br />| doc.Tory_Map | check | status | OK |<br />+------------------+-------+----------+----------+<br />1 row in set (6 min 3.33 sec)Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com1tag:blogger.com,1999:blog-3118333704316399201.post-58057478586619217692010-12-26T04:42:00.001-08:002010-12-26T04:43:54.058-08:00Replication resumed, Apparent master shutdown, Failed reading log eventThe slave server is getting this error in the error log.<br /><br />081102 0:30:18 [Note] Slave: connected to master 'replicate@10.0.1.5:3310', replication resumed in log 'central3310.000142' at position 789529654<br />081102 0:30:18 [Note] Slave: received end packet from server, apparent master shutdown:<br />081102 0:30:18 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'central3310.000142' position 789529654<br /><br />Solution :<br /><br />The problem started when a new slave was configured for the existing setup. The new slave's server-id exactly matched another slave's server-id, so replication would run for some time and then stop; finally one server crashed. 
To rectify this, stop one of the slaves, change its server-id and start the slave again.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com1tag:blogger.com,1999:blog-3118333704316399201.post-34726070213478406302010-12-26T04:40:00.000-08:002010-12-26T04:41:26.073-08:00Error: data file /mysql/data/mysqld/ibdata1 is of a different sizeIf MySQL does not start and the error log contains the below error after configuring a slave from the tar file of the master:<br /><br />081022 6:19:02 InnoDB: Warning: shutting down a not properly started<br />InnoDB: or created database!<br />081022 6:19:02 /usr/local/mysql4020/libexec/mysqld: Shutdown Complete<br /><br />InnoDB: Error: data file /mysql/data/mysqld/ibdata1 is of a different size<br />InnoDB: 128000 pages (rounded down to MB)<br />InnoDB: than specified in the .cnf file 65536 pages!<br />InnoDB: Could not open or create data files.<br />InnoDB: If you tried to add new data files, and it failed here,<br />InnoDB: you should now edit innodb_data_file_path in my.cnf back<br />InnoDB: to what it was, and remove the new ibdata files InnoDB created<br />InnoDB: in this failed attempt. InnoDB only wrote those files full of<br />InnoDB: zeros, but did not yet use them in any way. But be careful: do not<br />InnoDB: remove old data files which contain your precious data!<br />081022 6:19:04 Can't init databases<br />081022 6:19:04 Aborting<br /><br />then set the variable below to the same value as in the master cnf file<br /><br />innodb_data_file_path =ibdata1:2000M;ibdata2:100M:autoextend<br />and start the mysqld server.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-26952438263663762872010-12-26T04:36:00.000-08:002010-12-26T04:38:48.491-08:00Lost connection to MySQL serverThis type of error may occur in the below scenarios.<br /><br />1) It indicates network connectivity trouble. 
If the error message includes "during query", look at the query and the number of rows being sent, and try increasing the "net_read_timeout" variable from its default value.<br />2) It can happen when the client is attempting the initial connection to the server. If your "connect_timeout" value is set to only a few seconds, increase it to resolve this issue.<br />3) It can occur when using BLOB values that are larger than "max_allowed_packet"; you may get "packet too large" as the error. Increase the "max_allowed_packet" value to resolve this.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-83606686641144610772010-12-26T04:35:00.000-08:002010-12-26T04:36:46.228-08:00show procedure status or mysqldump with routines causing mysqld restart in 5.1.40 versionWhen we take a mysqldump with routines, or run show procedure status where Db='dbname';<br /><br />the mysqld server gives an error like "Lost connection to mysqld server"; when we log in again the server will be running, meaning it got restarted.<br /><br />To fix this, upgrade the databases using mysql_upgrade; that fixed it for me.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-13737257922169980792010-12-26T04:31:00.000-08:002010-12-26T04:34:00.886-08:00MySQL Cluster Query ExecutionMySQL Cluster executes queries using one of the below types of scans:<br /><br /> 1. Primary key lookup<br /> 2. Unique key lookup<br /> 3. Ordered index scan (i.e., non-unique indexes that use T-trees)<br /> 4. Full table scan<br /><br />Let's say you have 4 data nodes in your cluster (NoOfReplicas=2). This means you have 2 node groups and each one has half the data. Cluster uses a hash on the primary key (unless you've controlled the partitioning using the 5.1 partitioning features). 
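The hash-based distribution just described can be sketched as below. This is only an illustrative model assuming 2 node groups; it uses Python's built-in hash, not the real NDB partitioning hash:

```python
# Illustrative sketch only: NDB's real partitioning hash differs, but the
# principle is the same -- the hash of the primary key picks the node group
# (partition) that owns the row.
NODE_GROUPS = 2  # 4 data nodes with NoOfReplicas=2 -> 2 node groups

def node_group_for(primary_key):
    """Map a primary key to the node group that stores its row."""
    return hash(primary_key) % NODE_GROUPS

# Distribute 10,000 rows and count how many land in each node group.
counts = [0] * NODE_GROUPS
for pk in range(10_000):
    counts[node_group_for(pk)] += 1

print(counts)  # roughly half the rows end up in each node group
```

This is why a primary key lookup can go straight to one data node: the same hash computed on the MySQL server side identifies the owning node group without asking the others.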
So for any table, half the rows are in one node group and half the rows are in the other node group.<br /><br />Now for the 4 types of query execution. You can verify which type of execution is used with EXPLAIN. Here's how each one works:<br /><br /> 1. Primary key lookup - the MySQL server can calculate the hash on the primary key and know which data node the relevant row resides in. Then the MySQL server contacts that data node and receives the row.<br /> 2. Unique key lookup - the MySQL server cannot know which data node the row might be stored in. So it contacts a pseudo-random node. That data node has a hidden table that maps the unique key values to the primary key. Then the hash on the primary key reveals where the row resides.<br /> 3. Ordered index scans are sent in parallel to all data nodes, where they search their local T-tree.<br /> 4. Full table scans are sent in parallel to all data nodes, where they scan their primary fragment.<br /><br />Summary: primary key lookups are best. If you have more than 2 nodes, throughput goes up because all nodes are actively serving different fragments of data. Ordered index lookups and full table scans are done in parallel, so more nodes lead to better performance.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-59554287985247332242010-01-03T21:26:00.000-08:002010-01-03T21:44:14.184-08:00MONyog MySQL Monitor and AdvisorMONyog MySQL Monitor and Advisor helps you manage more MySQL servers, tune your current MySQL servers and fix problems with your MySQL database applications before they become serious. MONyog pro-actively monitors enterprise database environments and provides expert advice on how even those new to MySQL can tighten security, optimize performance and reduce downtime of their MySQL powered systems.<br />As a MONyog user you will know in advance if server resources will soon be running short. 
You can plan a hardware replacement or upgrade in comfortable time and not when it is almost too late (or just plain too late!). And the metrics about the server that MONyog gives you will help to decide what to upgrade: Faster CPU, More RAM, Faster disc systems - or maybe only remove another program from the computer where the MySQL server is running.<br /><br />MONyog provides a log analysis module, a "query sniffer" and a "processlist watcher" that makes it easy to identify the statements and applications that run slow on the server. This includes various filtering and EXPLAIN options for an efficient workflow when you want to optimize your applications. To use the MONyog mail alert functionality to mail you alerts about a MySQL server you will first need to select that feature when registering a server. You can choose to receive mails, not only when there are problems but also to notify that the system is running smoothly at regular intervals. Further, you can set the minimum time interval between mails sent by MONyog to suit your needs and to avoid cluttering your mailbox. For every MySQL server choose a setting depending on how important the performance of this particular server is.<br /><br />MONyog MySQL Monitor and Advisor Features<br />Agent-less architecture<br />Get up and running in less than 1 minute!<br />No installation required on servers<br />Install on a single machine and monitor any no. 
of servers<br />No hassles of upgrading agents on all server<br />Server Optimization<br />Completely new set of Advisors<br />Monitoring Tools for MySQL Community and Enterprise Servers<br />History/Trend Analysis<br />Quick Performance Tuning and Optimization<br />Analyze MySQL and OS performance data collected over a period of time<br />Error Log Monitoring New!<br />Find Problem SQL<br />Querying MySQL Proxy<br />Analyzing General Query Log<br />Analyzing Slow Query Log<br />Issuing SHOW PROCESSLIST at regular intervals<br />Additionally all the above reports can be exported as CSV New!<br />Multi-platform Support<br />Monitor MySQL on any platform<br />Available on Windows and Linux<br />Support for both 64-bit and 32-bit Linux<br />Both RPM and Non RPM Package<br />Security<br />Detect MySQL hacking attempts<br />Identify and fix security vulnerabilities<br />User Friendly GUI<br />AJAX interface<br />Simple Descriptions for all performance metrics. Helps you learn MySQL Internals<br />Fully customizable using java script and MONyog Object Model<br />Compare unlimited MySQL servers side-by-side on Real time Dashboard<br />Enterprise dashboard helps identify problems quickly<br />Monitor Replication<br />Get alerts when replication lags behind or if slave has stopped<br />Replication Log details for replication trouble shooting<br />Miscellaneous Features<br />Shows Processlist displaying information on running queries as per execution time<br />Inbuilt web-server<br />Does not force to install multiple web-servers & language runtimes like other tools<br />SSH Tunneling Support<br />Get proactive alerts via email and SNMP Traps New.<br /><br />Suresh Kuna<br />MySQL DBASuresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-51046834982826082252009-12-30T03:02:00.000-08:002009-12-30T03:06:21.229-08:00SHOW INNODB STATUS walk throughThe below URL explains the SHOW INNODB STATUS in 
detail<br />http://www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/<br /><br />Suresh KunaSuresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-45913428030724062402009-12-29T23:09:00.000-08:002009-12-29T23:11:49.958-08:00What is Ajax ?AJAX isn't a technology, or a language, and there's no recipe to implement it; it's just a combination of various components to achieve something you otherwise couldn't: asynchronous http requests. However, since early 2005, when Google and Flickr popularized the concept, its use has grown rapidly.<br /><br />The name AJAX is short for Asynchronous JavaScript and XML. It uses the JavaScript XMLHttpRequest function to create a tunnel from the client's browser to the server and transmit information back and forth without having to refresh the page. The data travels in XML format because it transmits complex data types over clear text.<br /><br />AJAX uses XHTML for the data presentation of the view layer, DOM, short for Document Object Model, which dynamically manipulates the presentation, XML for data exchange, and XMLHttpRequest as the exchange engine that ties everything together.<br /><br />Because of these requirements, AJAX works on I.E. 5.0+, Mozilla 1.0+, Firefox 1.0+, Netscape 7.0+, and Apple added it to Safari 1.2+.<br /><br />Traditional HTML sends a request to the server, which processes it and either returns a static HTML page or dispatches the request to some scripting language such as ColdFusion, which creates and returns an HTML page for the browser to render. When this method has to retrieve new data from the server it has to repost and reload another HTML file. 
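By contrast, the asynchronous approach requests only the data that changed. A minimal browser-side sketch of the XMLHttpRequest flow described above (the /check_user URL and the "result" element are hypothetical, not from the original post):

```javascript
// Minimal sketch: fetch a fragment of data asynchronously and update
// part of the page without reloading it. "/check_user" is a hypothetical
// server endpoint used only for illustration.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/check_user?name=suresh", true); // true = asynchronous
xhr.onreadystatechange = function () {
  // readyState 4 = request complete; only then is responseText ready.
  if (xhr.readyState === 4 && xhr.status === 200) {
    document.getElementById("result").innerHTML = xhr.responseText;
  }
};
xhr.send(null); // a GET request carries no body
```

Only the small response fragment travels over the wire; the surrounding page shell is never re-downloaded, which is exactly the overhead saving the paragraph above describes.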
In many cases perhaps only a small portion of the returned HTML code varies and the shell itself remains the same resulting in huge overhead because the data has to be downloaded every time.<br /><br />Some classic examples of applications that would benefit from AJAX are searching for information and displaying it back in a table, related select dropdowns, or checking if a user exists before submitting an entire form.<br /><br />The below url contains a detail implementation of how it works.<br />http://ajax.sys-con.com/node/138966Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-32101822378225552022009-12-27T23:54:00.000-08:002009-12-28T00:06:18.415-08:00DBConvert for Oracle and MySQLFree tools for the Database conversion to Mysql.<br />http://www.zdnetasia.com/downloads/pc/swinfo/0,39043052,39654405s,00.htm<br />and http://dev.mysql.com/downloads//gui-tools/5.0.html<br />The following are some of the key features of the MySQL Migration Toolkit:<br /><br /> * The MySQL Migration Toolkit supports a variety of source database systems, including the following:<br /> o Oracle<br /> o Microsoft SQL Server<br /> o Microsoft Access<br /> o Sybase<br /> o MySQL<br /> <br /> * The MySQL Migration Toolkit is fully customizable through its Java runtime interface. Advanced users can use Java to perform custom data and schema transformations.<br /> * The MySQL Migration Toolkit supports agent-based data migrations, with the MySQL Migration Toolkit residing on a separate machine than the source and target database servers, through the use of a special migration agent. 
The agent-based data migration improves migration performance by allowing data to be transferred directly from the source machine to the target machine without being routed through the MySQL Migration Toolkit.Suresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com1tag:blogger.com,1999:blog-3118333704316399201.post-20998224506704024722009-12-23T00:26:00.000-08:002009-12-23T00:27:58.027-08:00Feature enhancements to MySQL ClusterHow important / useful would the following potential enhancements to MySQL Cluster be ?<br /><br />Running MySQL Cluster on Windows in a production environment<br />Running MySQL Cluster within virtual machines in a production environment<br />Improved performance for complex joins<br />Support for Foreign Keys<br />More than 128 columns per row<br />More than 8 Kbytes of data (excluding BLOBs) per row<br />Ability for indexes to be stored on disk rather than always in memory<br />Improved performance for asynchronous (Geographic) replication (This does *not* refer to the synchronous replication between data nodes within a single Cluster).<br />Disk-durable commits - the ability to specify that a transaction does not commit until the in-memory change has been persisted to disk (Latency would be impacted)<br />Integrate MySQL Cluster backups with Enterprise backup solutions (e.g. Tivoli, NetBackup, HP DataProtector)<br />XA Distributed commit<br />Support for application and data nodes to run on machines with different endian architectures<br />Allow MySQL Cluster to continue operations after a full node-group has been lost (Depending on configuration, a limited number of rows in each table would be unavailable when a full node group is lost)<br />Allow effective use of more than 8 cores/hardware threads for a single data node<br />On-line drop column<br />Meta data (views, privileges etc.) 
shared between MySQL Servers<br />Enhanced asynchronous (geographic) replication management<br />MySQL Enterprise Monitor support for MySQL Cluster (including data nodes)<br />Extend conflict resolution for asynchronous (geographic) replication to include deletes and inserts<br />Extend conflict resolution for asynchronous (geographic) replication to remove requirement for application to manage Timestamps.<br />Support for the Service Availability Framework (SAF)<br />NDB API thread contention<br />2 x Data Node (1 node group) configuration where application selects which data node should be responsible for each row (e.g. all rows active on the same data node)<br />Variable sized VARCHARS on disk<br />Improve OPTIMIZE TABLE command<br />Enhanced memory use reporting<br />Savepoints<br />Support Map Reduce<br />Increase security by allowing table data to be transparently encrypted and the key kept secure.<br />SNMP support for traps<br />SNMP support for management<br />Backup a single database<br />Filtering of NDB API events<br />Abort transactions when disks become overloaded in order to avoid data nodes crashingSuresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0tag:blogger.com,1999:blog-3118333704316399201.post-27476629095332561762009-12-23T00:10:00.000-08:002009-12-23T00:18:55.620-08:00MySQL Storage EnginesArchive<br />BerkeleyDB<br />Blackhole<br />CSV<br />Example<br />Falcon<br />Federated<br />InfoBright<br />InnoDB<br />Kickfire<br />Maria<br />Memory<br />Merge<br />MyISAM<br />NDB<br />PBXT<br />TokuDB<br />Custom Storage EngineSuresh Kunahttp://www.blogger.com/profile/03528300969127611324noreply@blogger.com0
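To see which of the engines listed above a given server actually supports (and which one is the default), a quick check from the mysql client (output varies by server version and build):

```sql
-- Lists every engine known to the server with its Support status
-- (YES / NO / DEFAULT / DISABLED).
SHOW ENGINES;

-- Shows the engine an existing table actually uses (Engine field).
SHOW TABLE STATUS LIKE 'test'\G
```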