-
May 5th, 2013, 09:43 AM
#1
Thread Starter
Hyperactive Member
MySQL 2nd reading speed change
I'll admit I'm still very new to SQL (MySQL on Linux, not MSSQL).
My database is very large.
I notice that when I access a row the first time, it's slow,
but when I access it again, it's quick.
Because my database is used for gaming, and it's very large, I'm looking into the reason behind this so I can speed it up.
Is there caching involved?
I noticed there is some improvement from narrowing the search parameter for the row
(for example, here I refer to just the ID of the row instead of all of the values involved in the initial search)
Code:
....
if ($count == 0)
{
    // No existing row at these coordinates, so insert a new one
    $sql = "INSERT INTO $LandTable (West,North,Type1) VALUES ($West,$North,'$Char')";
    mysql_query($sql, $LandCon) or die("error 109 :[$sql]: " . mysql_error());
}
else
{
    // Row exists; update it by primary key instead of repeating the coordinate search
    $info = mysql_fetch_array($result);
    $ID = $info['ID'];
    $sql = "UPDATE $LandTable SET Type1='$Char' WHERE ID = $ID"; // was: WHERE North = $North AND West=$West
    $result = mysql_query($sql, $LandCon) or die("error 110: $Char " . mysql_error());
}
echo "SUCCESS DIG ";
}
Any other pointers you can give on how to speed up MySQL without having to recode my entire game engine would be greatly appreciated.
EDIT:
more info regarding the particular database in question
size = 3.1 MiB, overhead = 450.9 KiB
Last edited by Waxy; May 5th, 2013 at 09:56 AM.
----------------------------------------------------
Missing the days of GWBasic 1.1
WaxyStudios.com
-
May 6th, 2013, 10:19 AM
#2
Frenzied Member
Re: MySQL 2nd reading speed change
How large is this table? Are all the fields indexed? If you're using these queries often, PDO may suit you better: you can prepare one statement and execute it over and over with different parameters, which may net you some increased performance.
Also, when you say slow, how slow? Are you sure the slowdown is at the point where you're inserting into this table?
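A minimal sketch of what the prepared-statement approach looks like with PDO. The connection details, table name (`Land`), and column names here are just placeholders standing in for whatever your schema actually uses:

```php
<?php
// Hypothetical connection details; substitute your own host, database, and credentials.
$pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

// Prepare the statement once...
$update = $pdo->prepare(
    'UPDATE Land SET Type1 = :tile WHERE West = :west AND North = :north'
);

// ...then execute it many times with different parameters.
// Bound parameters also mean no string-splicing of user values into the SQL.
foreach ($tiles as $t) {
    $update->execute([
        ':tile'  => $t['char'],
        ':west'  => $t['west'],
        ':north' => $t['north'],
    ]);
}
?>
```

The server parses and plans the statement once at `prepare()` time, so repeated `execute()` calls skip that work.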
Justin
-
May 6th, 2013, 11:21 PM
#3
Re: MySQL 2nd reading speed change
Maybe the images and other resources of your page are causing the loading issue, since you said that it takes some time to load the first time and the rest load fast.
Browsers do some caching of pages, so I think the first time everything loads directly from the server, and from the second time onwards the images and other resources come from the cache. See if that's the issue, or if it's really the queries that are taking the time.
If my post was helpful to you, then express your gratitude using Rate this Post.
And if your problem is SOLVED, then please Mark the Thread as RESOLVED.
My system: AMD FX 6100, Gigabyte Motherboard, 8 GB Crossair Vengance, Cooler Master 450W Thunder PSU, 1.4 TB HDD, 18.5" TFT(Wide), Antec V1 Cabinet
Social Group: VBForums - Developers from India
Skills: PHP, MySQL, jQuery, VB.Net, Photoshop, CodeIgniter, Bootstrap,...
-
May 9th, 2013, 08:17 AM
#5
Thread Starter
Hyperactive Member
Re: MySQL 2nd reading speed change
Any caching would not be HTML.
The script is used by a VB.NET 2010 app; all images are loaded into RAM on the app's startup.
The database's table (same one as above): size = 3.1 MiB, overhead = 450.9 KiB. Just one table.
An example: in the game, a player places a new land tile at specified X, Y coordinates.
Here are the calls:
(pseudocode)
Code:
SQL SELECT on the land table for the tile at X,Y (by trying to retrieve the tile number)
if land tile at X,Y exists then
{
    SQL UPDATE X,Y: make tile = new tile
}
else
{
    SQL INSERT X,Y, Tile
}
so up to two calls to that table when it's needed.
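Spelled out as concrete SQL, using the West/North/Type1 column names from the PHP snippet in the first post (the literal values 12, 34, 'G', and 57 are made-up placeholders), those two calls would look roughly like:

```sql
-- 1) Look up the tile at the given coordinates
SELECT ID, Type1 FROM Land WHERE West = 12 AND North = 34;

-- 2a) If a row came back, update it by its primary key
UPDATE Land SET Type1 = 'G' WHERE ID = 57;

-- 2b) Otherwise, insert a new row
INSERT INTO Land (West, North, Type1) VALUES (12, 34, 'G');
```

(With a unique key on (West, North), MySQL can also collapse this pattern into a single INSERT ... ON DUPLICATE KEY UPDATE statement, which would halve the round trips.)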
The delay can be as high as 4 seconds.
(That is THE MOST the table is used; there are some lesser uses for it, but that is the most.)
However, regardless of what the previous call to that X,Y was doing: if the player's MOST RECENT SQL action was at that same X,Y and used the same table, then the delay is cut by MORE than half.
So a player could use a "Look" function to get the description of the land tile at that location.
Regardless of whether there is a value at that X,Y or not, if they THEN tried to build at that X,Y, it would be fast.
Likewise if they changed that same X,Y tile value multiple times consecutively:
first call slow, successive calls fast, until they switch to a different X,Y.
(No, the properties of those coordinates are not stored in the local app; to reduce hacking, all checks are done server side. So app cache is not the culprit.)
If you're using these queries often, PDO may suit you better.
Wow, it would really bite if I had to change my API; it would take me a week to go through that much code.
It's not just one script that would need to be changed to have a universal API: it's a site that includes PHP webpages, PHP HTTP application scripts (that the VB.NET app calls), and PHP scripts to interact between the two.
I experimented by resetting this Land table to empty
(the PHP script finds nothing at X,Y and tells the app "just use tile number n").
Though that solved the problem for now, later on, when players customize the ground, or bombs make numerous scorched-earth entries in the land tiles, the problem may resurface.
(I try to remain forward thinking.)
About your queries: it's so important when you have a large DB (in my opinion, any DB over 1 MB is large). First of all, set up partitioning at the start. And more importantly, in a SELECT query, don't select more than you need: if you don't need everything, don't use *, and don't use too many conditions.
Partitioning? I only know about disk partitioning. If you could recommend a link regarding SQL partitioning, I'd appreciate it.
The most extensive condition I use is (WHERE X = n AND Y = n).
The smaller tables used for other purposes may be subject to more conditions, but not more than 3 (e.g. WHERE X = n AND Y = n AND Item != n).
But those tables should always remain under 1 MB in size (in theory), and after loading them up for a stress test, performance remained tolerable.
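For reference, MySQL table partitioning splits one table's rows into separate physical chunks by a key expression, so a lookup only touches one chunk. It is probably overkill for a 3 MiB table, but a minimal sketch (table and column names are illustrative, matching the earlier snippets) looks like:

```sql
-- Hypothetical table: rows are spread across 4 physical partitions
-- by hashing the West coordinate.
CREATE TABLE Land (
    ID    INT NOT NULL AUTO_INCREMENT,
    West  INT NOT NULL,
    North INT NOT NULL,
    Type1 CHAR(1),
    PRIMARY KEY (ID, West)  -- the partitioning column must appear in every unique key
)
PARTITION BY HASH(West)
PARTITIONS 4;
```

The MySQL manual's "Partitioning" chapter covers the other schemes (RANGE, LIST, KEY) and their restrictions.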
Are all the fields indexed?
I'll have to google what "indexed" means.
They all have an auto-incrementing ID field, if that's what you mean.
(I warned you I was new to this, LOL.)
----------------------------------------------------
Missing the days of GWBasic 1.1
WaxyStudios.com
-
May 9th, 2013, 09:03 AM
#6
Re: MySQL 2nd reading speed change
It sounds like indexing is probably the issue. As the second usage of a record is fast, it implies that the database system has cached a reference to it.
Indexes allow the database system to do the initial finding of records more quickly, by storing some metadata about the data (much like the index you find at the back of a reference book). Depending on your data and the indexes you use, the speed increase can be immense.
It isn't all good news though, as they increase the database size and memory usage a little (ignorable in your case), and make adding/editing/deleting values in the relevant columns slower (my educated guess is that you will benefit overall, especially if you don't often edit the X/Y values or add/delete records).
Don't index everything, just the fields that are needed most often in WHERE clauses (I presume just X and Y).
As for how you set them up, I can't help with the details as I don't use MySQL.
One thing to think about is whether the X and Y values have a finite range (e.g. will X always be between 0 and 999?), because if so you could combine the X and Y values into one field (e.g. X + (Y * 1000)), which would enable the indexes to be even faster and the data/memory usage to be smaller.
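In MySQL, the setup described above would be a composite index over both coordinate columns; assuming the West/North names from the code in the first post, a sketch:

```sql
-- Composite index so WHERE West = ? AND North = ? can seek
-- directly to the matching row instead of scanning the whole table.
ALTER TABLE Land ADD INDEX idx_coords (West, North);

-- Or, if each (West, North) pair should only ever have one row,
-- a unique index enforces that constraint as well:
-- ALTER TABLE Land ADD UNIQUE INDEX uniq_coords (West, North);
```

Column order matters: an index on (West, North) also serves queries that filter on West alone, but not queries that filter only on North.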
-
Jun 1st, 2013, 03:27 AM
#8
Re: MySQL 2nd reading speed change
Late to the party, but still: If you're running MySQL on Linux, you get caching for free, because Linux will cache disk pages in RAM.
It sounds like there is something badly wrong with your storage system. Four seconds to look up a record in a 3 MB table is very poor: indexing or no indexing, you should be able to read 3 MB into RAM in no time at all. Is the storage perhaps on a different machine from the database server?
Also, an SQL database probably isn't the right type of storage for this at all. Si's suggestion makes sense: define boundaries for X and Y, and then create a finite-sized grid which covers all of those possible values. This lets you index into the grid instantly, even if your storage subsystem is especially laid-back.
Last edited by penagate; Jun 1st, 2013 at 03:30 AM.