PHP script execution time, Too long or never mind?.. |
Dag |
Sep 10 2011, 05:10 AM
Post
#1
|
Advanced Member Group: Members Posts: 107 Joined: 24-October 06 Member No.: 549 |
CODE
$tStart = microtime();

some PHP code here...

CODE
$tEnd = microtime();
$aA = explode(' ', $tStart.' '.$tEnd);
$tSeconds = sprintf('%01.8f seconds', ($aA[2] + $aA[3]) - ($aA[0] + $aA[1]));
print "<p><i>Script did all in ".$tSeconds."</i></p>\n";

I have a script with an average run time of about 25 seconds... should I take that as "This script needs optimization for sure!" or not? Dag |
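A side note on the timing snippet above: since PHP 5, `microtime(true)` returns the timestamp as a float directly, so the string splitting and the four-element arithmetic can be dropped. A minimal sketch (the `usleep()` call is just a placeholder for the real work):

```php
<?php
// Simpler timing: microtime(true) returns the Unix timestamp as a
// float (PHP 5+), so no string parsing of "msec sec" is needed.
$tStart = microtime(true);

usleep(100000); // placeholder for the work being measured (0.1 s)

$tEnd = microtime(true);
$tSeconds = sprintf('%01.8f seconds', $tEnd - $tStart);
print "<p><i>Script did all in ".$tSeconds."</i></p>\n";
```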
Brian Chandler |
Sep 10 2011, 09:22 AM
Post
#2
|
Jocular coder Group: Members Posts: 2,460 Joined: 31-August 06 Member No.: 43 |
Well, it all depends what the program does, doesn't it?!
Most "ordinary" web programming doesn't do anything computationally intensive, but perhaps yours does. What is it? |
Dag |
Sep 10 2011, 10:10 AM
Post
#3
|
Advanced Member Group: Members Posts: 107 Joined: 24-October 06 Member No.: 549 |
Hi there. Still in Japan? Untouchable?
Have you become a specialist? If yes, you can help for sure. I am opening and reading about 200 XML files one by one and collecting (with some processing) their data. I am not satisfied with the following:

1.
CODE
$getID = "|<tag_name>(.*?)<\/tag_name>|";
preg_match_all($getID, $fileContent, $arrID);

because the data per file look like: 'goodID' 'goodID' 'badID' 'goodID' 'badID' * * * I am cleaning up all of that mess later using if/else. If I found a way to exclude the bad IDs earlier, it would save some time for sure.

2. The next one is probably the biggest problem: I have 2 arrays with about 1,000 members each, and I am passing through them replacing each ID for the sake of sorting later. About 4,000 IDs per month are in the game so far... Do you have any experience with all of this? Dag |
Darin McGrew |
Sep 10 2011, 11:34 AM
Post
#4
|
WDG Member Group: Root Admin Posts: 8,365 Joined: 4-August 06 From: Mountain View, CA Member No.: 3 |
Have you timed the two halves of the program to see how much time it's spending on each?
Are you actually sorting anything, or merely preparing data to be sorted? If you're sorting something, then what sorting function/algorithm are you using? How often does this 25-second page get viewed? How often does the data it's using change? |
Dag |
Sep 10 2011, 12:36 PM
Post
#5
|
Advanced Member Group: Members Posts: 107 Joined: 24-October 06 Member No.: 549 |
Ugh, so many tough and pertinent questions! I should have started from them instead of writing the code straight away. But I was in a hurry.
Thanks Darin. I'll answer and then describe exactly what I am doing.

QUOTE
Have you timed the two halves of the program to see how much time it's spending on each?

Nope, but I'll do it right now and post the results here. I have a few critical spots (the extra-large arrays, if they are the problem).

QUOTE
Are you actually sorting anything, or merely preparing data to be sorted? If you're sorting something, then what sorting function/algorithm are you using?

Both: first I get the ID numbers and create a new array with those numbers as keys. I use ksort() only after everything is done.

QUOTE
How often does this 25-second page get viewed? How often does the data it's using change?

I understand. Caching the data will certainly have its place, but any single change in the XML (and that will be the case) pushes me to calculate and analyze again. I am basically talking about a script that I run not online but on my computer only. The cached page is what I can post online.

Now all the details: We have production of 3 types of goods, plus waste as a separate good (a 4th type), in 2 locations, with a third in the near future. The base document, or tool, is our Work Production Order, a few per day. Goods quantity per location is approximately 4,500 per month (in the third one it will be more... probably twice as much). Each Work Order is created in a special production program and exported to XML. Sometimes I find mistakes in the program and/or the XML file, so that's why we need to edit them. That XML is the main file which I use to import data into our Information Data Base. I also want to analyze the files and have all their data for managerial planning (much more convenient than the IDB by itself). I am making monthly production reports. Let's take July: the final result is 70 XML files (Work Orders), 4,279 articles.

The script works like this:

1. I list the directory, checking it for file presence, file types, and names.

2. I open the first file and get the ID and quantity from it. Each work order has 1 or more pieces per ID. There is no work order with all IDs included. Waste is present in each one (also by ID and quantity).

3. Now, when I've got the IDs and quantities, I link them (via a third variable) to their corresponding S (sort) numbers (IDs are letters and are not constructed for proper sorting, while the S number is constructed well: s0001, s0002, s0003, etc.). That array has 1,316 members. Then I take the price for each one (another array with 1,316 members). Some prices are in EUR, some in RSD; an average exchange rate is used. Some are with VAT, some without. When everything is in RSD with VAT, I multiply price by quantity. And, at the end, I add the link to the article itself.

4. In the same loop, I create an array with the S number as key and everything else as the value, separated by '|', so it looks like this:

CODE
Array
(
    [s0118] => UVWW006|1|25460|25460|/katalog/_?id=s118
    [s0096] => KVWW002|1|19760|19760|/katalog/_?id=s096
    [s1317] => 0299|10|52|520|
)

After ksort($arr) I have the proper order: s0096, s0118, etc.

5. Then I pull all the values back out of each string:

CODE
foreach ($tmp as $key => $val) {
    $data = explode('|', $val);
    $id = $data[0];
    $q  = $data[1];
    // etc.
}

6. That's all. I have a small header and footer per Work Order and one at the very end of the report with accumulated totals (monthly quantity and total money amount).

I did my best. If anything is not clear enough, please let me know. I will split the arrays into three parts: I think it will make execution faster. |
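One observation on steps 4 and 5 above: joining the fields with '|' and exploding them back is an extra round trip, and it breaks if a value ever contains a '|'. Storing each record as a nested array keeps the same ksort() behaviour with no explode(). A sketch using the sample values from the Array dump above:

```php
<?php
// Sketch: keep each work-order line as a nested array instead of a
// '|'-joined string, so no explode() is needed later and a '|'
// inside a value can never break the record.
$tmp = array();
$tmp['s1317'] = array('id' => '0299',    'q' => 10,
                      'price' => 52,     'total' => 520,
                      'link' => '');
$tmp['s0118'] = array('id' => 'UVWW006', 'q' => 1,
                      'price' => 25460,  'total' => 25460,
                      'link' => '/katalog/_?id=s118');

ksort($tmp);   // same key sort as before: s0118, then s1317

foreach ($tmp as $sNum => $row) {
    $id = $row['id'];   // direct field access, no explode()
    $q  = $row['q'];
    // ... per-order header/footer and accumulation as in the script ...
}
```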
Brian Chandler |
Sep 10 2011, 01:31 PM
Post
#6
|
Jocular coder Group: Members Posts: 2,460 Joined: 31-August 06 Member No.: 43 |
QUOTE
I am opening and reading about 200 XML files one by one and collecting (with some processing) their data. * * * I am cleaning all of that mess later by using if else. If I found way to exclude them earlier, it will spare some time for sure.

I don't quite understand this "goodID"/"badID" bit. As Darin says, you need to work out which bit of your program is taking the time, and also predict which bit is going to be slowest when you have a lot more data than now. I think preg_match_all is very general and thus likely to be much slower than using the more basic functions such as strstr(), IF you are looking for a fixed string. What is "tag_name" exactly? |
Dag |
Sep 10 2011, 10:49 PM
Post
#7
|
Advanced Member Group: Members Posts: 107 Joined: 24-October 06 Member No.: 549 |
QUOTE
I don't quite understand this "goodID"/"badID" bit. * * * I think preg_match_all is very general and thus likely to be much slower than using the more basic functions strstr() etc, IF you are looking for a fixed string. What is "tag_name" exactly?

I have two ID types and only one is needed for the final analysis. One type is 7 characters long (the needed one) and the other is 8 characters long. They look like this:

CODE
PZWW001
PZWW002
PZWW003
dPZWW001
dPZWW002
dPZWW003

Both of them are in the same tag. The tag name for both is <id_tag>$var1</id_tag>. I also need data from <Q_tag>$var2</Q_tag>. strstr() can't do the job. |
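Given that description, the unwanted IDs differ only by the leading "d" (8 characters instead of 7), so the pattern itself can refuse them by requiring exactly 7 A-Z/0-9 characters, and no later if/else cleanup is needed. A sketch; it also assumes, hypothetically, that each <Q_tag> immediately follows its <id_tag>, which may not match the real XML layout:

```php
<?php
// Hypothetical input: wanted 7-char IDs plus one unwanted "d..." ID.
$xml = '<id_tag>PZWW001</id_tag><Q_tag>5</Q_tag>'
     . '<id_tag>dPZWW002</id_tag><Q_tag>3</Q_tag>'
     . '<id_tag>PZWW003</id_tag><Q_tag>12</Q_tag>';

// [A-Z0-9]{7} cannot match "dPZWW002": the lowercase "d" fails the
// class, so the 8-char IDs (and their quantities) are skipped in one
// pass, together with capturing ID and quantity as a pair.
preg_match_all('|<id_tag>([A-Z0-9]{7})</id_tag>\s*<Q_tag>([^<]+)</Q_tag>|',
               $xml, $m);

print_r($m[1]); // PZWW001, PZWW003
print_r($m[2]); // 5, 12
```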