I have created a table in MySQL with the following query.
CREATE TABLE `latlong` (
`id` int(10) NOT NULL auto_increment,
`lat` decimal(12,9) NOT NULL,
`lng` decimal(12,9) NOT NULL,
`place_name` varchar(253) NOT NULL,
PRIMARY KEY (`id`),
KEY `lat` (`lat`),
KEY `lng` (`lng`),
KEY `id` (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=800001 DEFAULT CHARSET=latin1;
I created a similar structure in PostgreSQL as well (the id column is a plain integer here rather than auto-incrementing, since its values come from the MySQL export).
CREATE TABLE latlong
(
id integer NOT NULL,
lat numeric(12,9) NOT NULL,
lng numeric(12,9) NOT NULL,
place_name character varying NOT NULL,
CONSTRAINT latlong_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE latlong OWNER TO postgres;
-- Index: lat
-- DROP INDEX lat;
CREATE INDEX lat
ON latlong
USING btree
(lat);
-- Index: lng
-- DROP INDEX lng;
CREATE INDEX lng
ON latlong
USING btree
(lng);
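Since the test query below groups by (lat, lng), a composite index on both columns might serve it better than the two single-column indexes above. A hedged sketch (the index names are my own choice, not from the original setup):

```sql
-- PostgreSQL: one index covering both GROUP BY columns
CREATE INDEX latlong_lat_lng ON latlong USING btree (lat, lng);

-- MySQL equivalent
ALTER TABLE latlong ADD KEY lat_lng (lat, lng);
```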
Step 2) I inserted 70 lakh (7 million) rows of random data into the table using a stored procedure.
Among them I kept 1 lakh (100,000) duplicate rows for testing purposes.
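The stored procedure itself isn't shown here; a minimal MySQL sketch of how such random data could be generated (the procedure name, coordinate ranges, and place names are my assumptions):

```sql
DELIMITER //
CREATE PROCEDURE fill_latlong(IN n INT)
BEGIN
    DECLARE i INT DEFAULT 0;
    WHILE i < n DO
        -- random coordinates within valid latitude/longitude ranges;
        -- decimal(12,9) leaves 3 digits before the point, so both fit
        INSERT INTO latlong (lat, lng, place_name)
        VALUES (RAND() * 180 - 90, RAND() * 360 - 180, CONCAT('place_', i));
        SET i = i + 1;
    END WHILE;
END //
DELIMITER ;

CALL fill_latlong(7000000);
```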
After that, I exported the MySQL data to CSV and imported it into PostgreSQL.
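The export and import steps could look like this (the file path is an assumption; note that MySQL's INTO OUTFILE writes on the database server, and PostgreSQL's COPY likewise reads a server-side file):

```sql
-- MySQL: dump the table to CSV
SELECT id, lat, lng, place_name
  INTO OUTFILE '/tmp/latlong.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'  FROM latlong;

-- PostgreSQL: load the CSV
COPY latlong (id, lat, lng, place_name)
  FROM '/tmp/latlong.csv' WITH (FORMAT csv);
```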
Now both tables have the same data and structure.
Then I ran the following query on both, and the time difference was striking.
select count(*)
  from (select max(id), count(id)
          from latlong
         group by lat, lng
        having count(id) > 1
       ) as tmp;
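The max(id) in the inner query hints at a cleanup step; the duplicates it finds could be removed with something like the following (a sketch, not from the original post). Note that it deletes only one copy per duplicate group per run, so groups with more than two copies would need repeated runs, and MySQL would additionally require wrapping the subquery in a derived table, since it cannot delete from a table it is selecting from:

```sql
-- PostgreSQL: remove the highest-id row of each duplicate group
DELETE FROM latlong
 WHERE id IN (SELECT max(id)
                FROM latlong
               GROUP BY lat, lng
              HAVING count(id) > 1);
```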
The result was 99998 in both tables.
The time taken for each was:
MySQL > 29.297 seconds
PostgreSQL > 7.922 seconds
In the above test, PostgreSQL showed better performance than MySQL.
Please share your thoughts about this.