

Official K-Probe Discussion (Tool for Scanning C1C2/PIPO)

General discussion about recordable CD, DVD and BD media and write quality testing.

Postby dolphinius_rex on Tue Apr 15, 2003 7:08 pm

How about a "sectors skipped over" counter? That way you can't miss anything! :D
Punch Cards -> Paper Tape -> Tape Drive -> 8" Floppy Diskette -> 5 1/4" Floppy Diskette -> 3 1/2" "Flippy" Diskette -> CD-R -> DVD±R -> BD-R

The Progression of Computer Media
User avatar
dolphinius_rex
CD-RW Player
 
Posts: 6923
Joined: Fri Jan 31, 2003 6:14 pm
Location: Vancouver B.C. Canada

Version 1.1.8 available

Postby MediumRare on Thu Apr 17, 2003 5:23 am

Karr Wang now has version 1.1.8 available on his web site:
http://home.pchome.com.tw/cool/cdtools/.

Release Notes

1. Fixed Bug: Start LBA is always 0 in PI/PO mode

2. C1/C2 average will ignore the read errors


I'm at work now and can't try it out until I get home in about 7 hours. :cry:

G
User avatar
MediumRare
CD-RW Translator
 
Posts: 1768
Joined: Sun Jan 19, 2003 3:08 pm
Location: ffm

Re: Version 1.1.8 available

Postby cfitz on Thu Apr 17, 2003 10:07 am

Yeah! More improvements. :D Thanks Karr (and Medium Rare, for notifying us).

2. C1/C2 average will ignore the read errors

I take it that this means that the averages are only calculated for the sections of the disc where C1/C2 errors can actually be measured, and the unreadable sectors are left out of the calculations altogether. Very good! I think that it is the right thing to do since there isn't any meaningful information about the error levels where the read errors occur.
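In other words, something like this minimal sketch (Python, with made-up per-sample counts; None stands for a sample lost to a read error):

Code: Select all
# Hypothetical per-sample C1 counts; None marks an unreadable stretch.
samples = [3, 7, None, 5, None, 2]

readable = [s for s in samples if s is not None]

# Average over readable samples only - unreadable ones carry no C1/C2 info.
average = sum(readable) / float(len(readable))   # (3+7+5+2)/4 = 4.25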

Medium Rare wrote:I'm at work now and can't try it out until I get home in about 7 hours. :cry:

Don't you get special leave for K's Probe releases? I think you should... So much for the enlightened European attitude towards workers' rights :wink:

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Re: Version 1.1.8 available

Postby MediumRare on Thu Apr 17, 2003 5:08 pm

cfitz wrote:Don't you get special leave for K's Probe releases? I think you should... So much for the enlightened European attitude towards workers' rights :wink:

cfitz

Well actually I did play around with it a bit, but that's a sideline- I'll tell you about that on a slow Tuesday.

I finally got around to trying the new version 1.1.8. The scan of my "crappy bleached audio disk" did not change significantly, so I looked at the 40x DataLife (CMC) burned @48x.

It was really interesting to watch the sums and running averages with the first scan- as long as the "skipping ..." message showed that errors were detected, these values stayed constant. The drive started reading again after ca. 70 min., and the sums climbed but the averages dropped. So the samples with errors no longer enter into the averages. Great!! :D
case 1:
Image

I repeated the tests several times and found that this first result is not reproducible. Here are 2 further runs with the same disk:
case 2:
Image
case 3:
Image
The range excluded because of errors is greatly reduced. However, the drive did not return to the maximum read speed- after the errors it only read at ca. 8x. This is not really in agreement with the design goal of reading at maximum speed, though I'm not sure that is even possible with crippled disks like this one. For normal purposes, K's Probe works very well. CD Doctor gave up on this disk at maximum read speed.

Now I tried to derive the total error time from the mean and total counts:
Code: Select all
Scan         Case 1   Case 3
C1 Tot       126857   259833
C1 Ave       33.766   61.108
C1 Time        3757     4252
C2 Tot        31937    53994
C2 Ave        8.501   12.698
C2 Time        3757     4252
Disk m:s:f 79:16:43 79:16:43
Disk  (sec)  4756.6   4756.6
Diff. (sec)   999.7    504.4
Diff. (min)   16.66     8.41

Here the sampling time (lines 3 and 6) is (total count) / average. The values from the C1 and C2 rates are the same, so this seems to be OK. The time excluded due to errors is in the last line: the difference between the disk length and the sampling time. For the 2 cases I looked at, though, this seems too long- I don't think there are 16 min. worth of errors in Case 1 or 8 min. in Case 3. Any comments on this? Where did I make a mistake?
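For reference, here is the arithmetic I did, as a small Python sketch using the Case 1 numbers from the table:

Code: Select all
# Case 1 values from the table above
c1_total, c1_ave = 126857, 33.766
sample_time = c1_total / c1_ave       # ~3757 s actually sampled

# disk length 79:16:43 in m:s:f (75 frames = 1 second)
disk_sec = 79*60 + 16 + 43/75.0       # ~4756.6 s

excluded = disk_sec - sample_time     # ~1000 s, i.e. ~16.7 min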

G
User avatar
MediumRare
CD-RW Translator
 
Posts: 1768
Joined: Sun Jan 19, 2003 3:08 pm
Location: ffm

Postby cfitz on Fri Apr 18, 2003 3:44 am

Okay Medium Rare, I think you have hit on something that either we haven't been interpreting properly or that Karr needs to adjust in his data presentation. Take the following example, which has no unreadable sectors (well, actually it has one in the area of the white boxes I added, but that is the topic for another post and doesn't materially affect what I am about to say):

Image

The extent of the scan was LBA 0 to 357379 (0x57403). At 75 sectors per second, that works out to 4765.05 seconds (79:25:05). Dividing the 20330 total C1 errors by 4765 seconds yields an average value of 4.266 errors per second. However, K's Probe lists the average as 4.976, 16.6% higher than what I calculate.

I also saved this test as a csv file. The last sample included in the file is at LBA 357379 (0x57403), exactly what K's Probe reports on the chart. Summing the individual sample values in the csv file, I arrive at a total of 20330 C1 errors. Again, this is exactly what K's Probe reports on the chart.

So why does K's Probe report a different average value for C1? The first hint of the answer can be found by noting that the csv file includes 4086 individual samples. Dividing 20330 by 4086 yields 4.976 - exactly what K's Probe reports on the chart. That explains the source of the denominator for the average Karr calculates, but not why the average disagrees with what I calculate by dividing the total number of errors by the length of the disc in seconds.

For the final part of the answer, we need to look at the average interval between samples listed in the csv file. Dividing the last LBA 357379 by 4086 samples we find that, on average, there are 87.46 sectors per sample. The normal 1x CD playback rate is 75 sectors per second. Thus, the average interval between samples within my file is 87.46 sectors / 75 sectors per second = 1.166 seconds. And that, right there, is the source of the 16.6% discrepancy in average values.
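To make that arithmetic explicit, here is the whole check in a few lines of Python, using the numbers from my csv file:

Code: Select all
last_lba, total_c1, n_samples = 357379, 20330, 4086

per_second = total_c1 / (last_lba / 75.0)        # 4.266 - my calculation
per_sample = total_c1 / float(n_samples)         # 4.976 - K's Probe's average

stretch = (last_lba / float(n_samples)) / 75.0   # 1.166 -> the 16.6% gap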

Karr is plotting and averaging the number of errors per data sample that he collects rather than the number of errors per second. If the samples were collected exactly on 1-second intervals, then errors per sample and errors per second would be equivalent. But they aren't, at least not on my machine. On my machine on this test, those samples were spaced 16.6% further apart, on average, than the 75 sectors per second of the CD-ROM specification for 1x playback. I think, Medium Rare, that if you also store and analyze a csv file like I did, you will find a similar effect that will account for the incorrect excluded error times you reported above.

Based on what I am seeing, my presumption is that data collection underneath the covers of K's Probe goes something like the following:

1. KProbe sets the desired test speed.
2. KProbe waits for drive to spin up.
3. Testing begins, and the C1/C2 error counters begin to accumulate error totals.
4. KProbe samples the counters at a rate that, given the instantaneous linear speed of the drive, approximates the equivalent of 1-second intervals for 1x reading. In other words, KProbe attempts to sample every 75 sectors.
5. KProbe records the number of errors that have accumulated since the last sample.
6. Continue with steps 4 through 5 until the end of the disc is reached.
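To put that presumption in concrete terms, the loop I am imagining looks roughly like the sketch below. To be clear, the helper functions are made up for illustration - this is not KProbe's actual code:

Code: Select all
# Sketch of the presumed collection loop; the drive_* helpers are hypothetical.
SECTORS_PER_SAMPLE = 75                  # 1 second at 1x playback

drive_set_speed(test_speed)              # step 1
drive_wait_for_spinup()                  # step 2
drive_start_error_counters()             # step 3

last_lba = 0
while last_lba < disc_end:               # steps 4-6
    # aim for the next 75-sector boundary; any OS scheduling delay
    # stretches the real interval beyond 75 sectors
    wait_until_lba(last_lba + SECTORS_PER_SAMPLE)
    lba, c1, c2 = drive_read_and_clear_counters()
    record_sample(lba, c1, c2)
    last_lba = lba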

Now, anything that slows down KProbe and prevents it from sampling at every 75th sector will increase the integration time over which errors can accumulate, and inflate the value recorded for that sample. I suspect this is why Karr is now recommending, and has made it the default setting, to not show the real-time chart while collecting measurement data. The extra work involved in drawing and scrolling the chart interferes with the ability to sample at regular intervals, and thus degrades the accuracy of the error counts.

What can be done to improve the situation? I don't think there is anything that can reasonably be done to improve KProbe's sampling interval accuracy itself to the point where it can be guaranteed to sample at exactly 1-second equivalent intervals. That would only be possible if the sampling thread were given true real-time priority and preempted all competing threads, and that isn't going to happen. Windows on the desktop wasn't designed to be an RTOS.

Instead, I would suggest post-processing the data to account for the sampling interval variability. Scale each sample's error count by the deviation of that sample's interval from 75 sectors so that errors per second, not errors per sample, are being displayed and used for calculations. Thus, if 100 errors are collected in a sample that spans 90 sectors, don't display a value of 100 on the chart, and don't use 100 for calculating the average. Instead, display and use the value of 100 errors / 90 sectors * 75 sectors per second = 83 errors per second.
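In code, the per-sample correction would be something like this (sample values made up for illustration):

Code: Select all
# Each sample: (sectors spanned, raw error count) - made-up data
samples = [(75, 40), (90, 100), (80, 56)]

per_second = [count * 75.0 / span for span, count in samples]
# e.g. 100 errors over 90 sectors -> 100 * 75/90 = 83 errors per second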

Of course, the grand total of errors collected over the entire disc need not and should not be scaled. It is but a simple summation of all errors, and has no rate component by which it needs to be scaled.

Okay, that is enough for this post. Karr, does this make sense? Is it correct as I have explained it, and are my suggestions for correcting it reasonable? I know this has been a long and somewhat detailed post, so, please don't hesitate to ask for clarification.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby cfitz on Fri Apr 18, 2003 4:08 am

Okay, on to round two. This involves the mystery of "disappearing" bad sectors. Let's begin by taking a look at the same charts I showed in my last post:

Image

At first glance, this appears to be a poor quality but not fatally flawed burn (so much for the vaunted quality of Mitsui CD-R's). The C1 rates are high and the disc has C2 errors as well, but at least it doesn’t show any unreadable sectors. However, I know for a fact that K's Probe did encounter an unreadable sector while testing this disc. If we zoom in on the areas outlined in white on the above charts, the unreadable sector becomes visible (I am using green for C1 errors, yellow for C2 errors, and red for unreadable sectors):

Image

The unreadable sector also shows up, as expected, in the csv file I saved for this test. So there is no problem with K's Probe detecting the unreadable sector. The problem is with the presentation of that sector.

Apparently the graphics routines Karr is using to draw the graphs will, under some circumstances, squeeze narrow spikes out of existence when plotting a lot of data in a relatively small window. I would really like to see this corrected, since allowing an unreadable sector to sneak through unnoticed is a very bad thing. Karr, how difficult would it be to fix this problem? Do you have any control over this behavior, or are you using a third-party graphics library that does all the scaling and clipping automatically without giving you enough control to guarantee that narrow spikes like this will be displayed?

I imagine that ripping out a graphics library and replacing it with a new one or reworking hand-rolled graphics routines would be a lot of work, so maybe we will have to make this a long-term request or even learn to just live with it as is. But in the meantime, perhaps you could at least add a summary count of the total number of unreadable sectors skipped, like dolphinius_rex suggested. It would be nice to have the count in general, and it would also alert us to any situations like this where the chart fails to show unreadable sectors.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby cfitz on Fri Apr 18, 2003 4:19 am

Oh, and one more thing. Then I promise I will stop harassing you. :wink: I also have noticed what Medium Rare reported when testing a disc with unreadable sectors. Many times the drive continues testing at speed when it encounters unreadable sectors. Other times it seems to slow for a moment and then spins back up before continuing the test. Both of these are in keeping with the desired goal of maintaining the specified test speed throughout the test. However, sometimes the drive slows down after hitting an unreadable sector and never speeds back up again. The disc in my previous post is one such example. I know you worked to fix this Karr, but is it possible that there are some circumstances that may still be preventing the drive from spinning back up to speed after it encounters an unreadable sector?

Like Medium Rare stated, this isn't that big a deal since it isn't reasonable to expect perfection when testing bad discs like this. Under normal circumstances K's Probe works great. But, if you do come across the cause of this and can correct it, we would appreciate it. :)

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby CDHero on Fri Apr 18, 2003 5:28 am

I think I must explain how KProbe collects C1/C2 and calculates the C1/C2 average.
In fact, it is impossible to change the interval for reading C1/C2 errors, because the MediaTek chip reports the C1/C2 of 75 sectors passively.
So KProbe (and even CD Doctor) must trigger the drive to start accumulating C1/C2 errors.
For example: the program triggers the chip to start accumulating at LBA=100, and it must get the C1/C2 data at about LBA=175. If the program gets the C1/C2 data at LBA=190, it will lose 190-175=25 sectors.
So, if the drive encounters a read error or slipping, KProbe may be unable to get the C1/C2 data. KProbe will then try to get the C1/C2 data at the next LBA, and if it cannot make it within 75 sectors, it marks it as an error, because the drive doesn't return C1/C2 data.

I think this is not an issue with KProbe. KProbe provides more information than CD Doctor, so some people will feel puzzled. If I turned off some functions of KProbe, I think there would be fewer issues.
CDHero
CD-RW Thug
 
Posts: 65
Joined: Tue Apr 08, 2003 8:46 pm

Postby cfitz on Fri Apr 18, 2003 5:56 am

Do you agree that if 20330 total errors are accumulated over 4765.05 seconds, then the average error rate should be 4.266 errors per second? This has nothing to do with unreadable sectors.

Following on with your example, suppose the program triggers the chip to start accumulating at LBA=100; by LBA=175, 20 errors have accumulated; by LBA=190, 3 more (for a total of 23); and by LBA=250, yet another 15 (for a grand total of 38). What will the program read if it gets the data at the following LBAs:

LBA=175 only
LBA=175 and 250 only
LBA=190 and 250 only
LBA=250 only

Is the following correct or incorrect?

LBA=175 only -> program reads 20
LBA=175 and 250 only -> program reads 20 and then 18
LBA=190 and 250 only -> program reads 23 and then 15
LBA=250 only -> program reads 38

If the above is wrong, how about this?

LBA=175 only -> program reads 20
LBA=175 and 250 only -> program reads 20 and then 38
LBA=190 and 250 only -> program reads 23 and then 38
LBA=250 only -> program reads 38

Or is it completely different from the above? Does the drive only accumulate errors for the most recent 75 sectors at any one instant in time, so that if you read late at LBA=190 you only get the error count for LBA=115 through LBA=190, and it is the errors from LBA=100 through LBA=115 that are lost? Is this what you mean by being unable to get the data and losing sectors? If so, then I guess the rates will be correct at each sample, and it must be the total that is wrong when the intervals are stretched.

karr_wang wrote:KProbe provides more information than CD Doctor, so some people will feel puzzled. If I turned off some functions of KProbe, I think there would be fewer issues.

No, no! Please don't do that! We are just trying to understand what all it tells us.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby dolphinius_rex on Fri Apr 18, 2003 6:11 am

cfitz: WOW! That was brilliantly described! I hadn't even noticed the problem with the averaging... Thanks for the info regarding the unreadable sectors being listed in the .csv file. I can at least look there if I have any questionable graphs.

Mr. Wang: This software is coming along very well! Please don't be discouraged, you have all of our support! :D
Punch Cards -> Paper Tape -> Tape Drive -> 8" Floppy Diskette -> 5 1/4" Floppy Diskette -> 3 1/2" "Flippy" Diskette -> CD-R -> DVD±R -> BD-R

The Progression of Computer Media
User avatar
dolphinius_rex
CD-RW Player
 
Posts: 6923
Joined: Fri Jan 31, 2003 6:14 pm
Location: Vancouver B.C. Canada

Postby MediumRare on Fri Apr 18, 2003 8:26 am

This post has been over 4 hours in the making.
cfitz wrote:Karr is plotting and averaging the number of errors per data sample that he collects rather than the number of errors per second. If the samples were collected exactly on 1-second intervals, then errors per sample and errors per second would be equivalent. But they aren't, at least not on my machine. On my machine on this test, those samples were spaced 16.6% further apart, on average, than the 75 sectors per second of the CD-ROM specification for 1x playback. I think, Medium Rare, that if you also store and analyze a csv file like I did, you will find a similar effect that will account for the incorrect excluded error times you reported above.

I was offline preparing a post (twice in fact- have spent 4 hours on this!) with essentially the same information as in your posts cfitz, but you've said it well and in more detail.

It's a bit more complicated than an average factor, though, because the sample length increases during a scan, typically 80 sectors at the beginning and 100 or more at the end (on my drive).

The factor depends on the read speed. I ran some tests on a good disk, so the question of slip/read error doesn't arise. Here's a summary:
Code: Select all
  Rate   mean sample
   8x     76.4
  16x     78.4
  24x     79.6
  32x     81.1
  max     84.3

CD Doctor doesn't do anything different- the CSV files show similar sampling intervals, varying from 80 sectors to over 100. The maxima shown there are determined per sample too.

I think this is related to the "ticks on the time axis" problem I mentioned earlier too- it looks like a global factor "total time" / "total samples" is used to subdivide the axis according to sample no. and then the individual LBA-counts are used to label the ticks. It would be better to use the LBA-value rescaled to time, not the sample no., for the time axis- in Excel this is the difference between a "line diagram" and a "point x-y" chart.
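In code, the axis handling I have in mind is just a rescaling of each sample's LBA to disk time, something like this sketch:

Code: Select all
def lba_to_msf(lba):
    # 75 sectors = 1 second at 1x; gives an m:s:f label for a tick
    m, rest = divmod(lba, 75 * 60)
    s, f = divmod(rest, 75)
    return "%02d:%02d:%02d" % (m, s, f)

# plot each sample at x = lba / 75.0 (seconds), not at its sample number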
karr_wang wrote:So, if the drive encounters a read error or slipping, KProbe may be unable to get the C1/C2 data. KProbe will then try to get the C1/C2 data at the next LBA, and if it cannot make it within 75 sectors, it marks it as an error, because the drive doesn't return C1/C2 data.

I don't understand that- this occurs with good burns too (no read errors). cfitz just posted some questions to clarify that.

In the meantime I had prepared a diagram on the assumption that the drive offers a count every 75 sectors and that this count is sampled at irregular intervals (these are actual sampling intervals taken from one scan- ca. 85 sectors). Is this the way it works?

Image

In this case, for example, the value the drive offers @600 sectors would be skipped by the program and the other samples would be noted at a later LBA (time point).

karr_wang wrote:I think this is not an issue with KProbe. KProbe provides more information than CD Doctor, so some people will feel puzzled. If I turned off some functions of KProbe, I think there would be fewer issues.

Please don't disable the options- we're just trying to understand the results and the extra information is very useful.
As I mentioned above, CD Doctor counts and averages like this too.

G
User avatar
MediumRare
CD-RW Translator
 
Posts: 1768
Joined: Sun Jan 19, 2003 3:08 pm
Location: ffm

Postby Kennyshin on Fri Apr 18, 2003 10:03 am

Over 3,000, cfitz. :D
Kennyshin
CD-RW Player
 
Posts: 1173
Joined: Tue May 14, 2002 12:56 am

Postby dodecahedron on Fri Apr 18, 2003 11:29 am

Kennyshin wrote:Over 3,000, cfitz. :D

yes!
congrats on the tri-millenium!
One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them
In the land of Mordor, where the Shadows lie
-- JRRT
M.C. Escher - Reptilien
User avatar
dodecahedron
DVD Polygon
 
Posts: 6865
Joined: Sat Mar 09, 2002 12:04 am
Location: Israel

Postby MediumRare on Fri Apr 18, 2003 12:21 pm

cfitz always goes a bit further- congrats on 3 x (1K + 1 per mill) as of this writing. :D

G
User avatar
MediumRare
CD-RW Translator
 
Posts: 1768
Joined: Sun Jan 19, 2003 3:08 pm
Location: ffm

Postby cfitz on Fri Apr 18, 2003 12:23 pm

MediumRare wrote:It's a bit more complicated than an average factor, though, because the sample length increases during a scan, typically 80 sectors at the beginning and 100 or more at the end (on my drive).

Yes, I noticed a good deal of variation as well, with the rough trend being a linear increase with LBA. Here is a hand-made chart of my test results, including the LBA delta between samples:

Image

Notice the linear ramp of LBA delta values and the fact that the LBA delta drops back towards 75 after the unreadable sector. This is consistent with what you report in terms of variation with linear reading speed and also reinforces my contention that the drive did not return to maximum read speed after it encountered the unreadable sector.

When I said that the samples were off by an average of 16.6%, I wasn't trying to suggest that all the samples were off by a single factor. And my initial suggestion to correct what I felt to be faulty values by scaling each sample's error value by the sample LBA expansion factor (75/90 in my example) wasn't meant to imply that a single average value would suffice. It would have to be calculated for each sample. And even that scaling method, as I described it, is a somewhat naive approach that could lead to incorrect values as well (it would artificially inflate error values for samples spaced by less than 75 sectors). To really do it right you would need to use a more sophisticated algorithm to completely resample the irregularly spaced data into regularly spaced samples.
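One simple form of such a resampling - a sketch only, assuming each sample's errors are spread uniformly over the sectors it spans - might look like this:

Code: Select all
def resample(samples, bin_size=75):
    # samples: list of (start_lba, end_lba, error_count), irregular spacing
    end = max(stop for _, stop, _ in samples)
    bins = [0.0] * (end // bin_size + 1)
    for start, stop, count in samples:
        per_sector = float(count) / (stop - start)
        for lba in range(start, stop):        # spread the count uniformly
            bins[lba // bin_size] += per_sector
    return bins                               # errors per 75-sector bin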

However, having read Karr's reply, I am right now leaning towards the thought that the individual sample values are correct in that they represent the actual error rates at any instant in time, measured in errors per second or, if you prefer, errors per 75 sectors. This would mean that my scaling idea is moot.

Medium Rare wrote:CD Doctor doesn't do anything different- the CSV files show similar sampling intervals, varying from 80 sectors to over 100. The maxima shown there are determined per sample too.

Yes, I neglected to mention that here, but I did state in the CDFreaks forum that I think CD Doctor is doing the same thing. :oops: Thanks for the confirmation.

Medium Rare wrote:I think this is related to the "ticks on the time axis" problem I mentioned earlier too

Yes indeed. Again I mentioned this over at CDFreaks but neglected to do so here. :oops: You will have to forgive me - I was up late.

Medium Rare wrote:It would be better to use the LBA-value rescaled to time, not the sample no., for the time axis- in Excel this is the difference between a "line diagram" and a "point x-y" chart.

Agreed, and that is an excellent analogy. By the way, I didn't notice this earlier since I didn't have time to examine the 1.1.6 and 1.1.7 releases, but have you noticed that the charts displayed in KProbe have changed from a line chart to a bar chart format? This is particularly evident when contrasting my hand-made chart in this post with KProbe’s chart of the same data that I included in my previous post. It is also evident when looking at the zoomed-in view in that previous post. I guess this was done when Karr added the display of unreadable sectors, in order to make the unreadable sectors show up better. There is nothing wrong with that; in fact I think it was the right choice, but I just happened to notice it now.

Medium Rare wrote:In the meantime I had prepared a diagram on the assumption that the drive offers a count every 75 sectors and that this count is sampled at irregular intervals (these are actual sampling intervals taken from one scan- ca. 85 sectors). Is this the way it works?

Image

In this case, for example, the value the drive offers @600 sectors would be skipped by the program and the other samples would be noted at a later LBA (time point).

Excellent illustration! A picture truly is worth a thousand words. That is essentially the scenario that I was trying to describe in the last example of my last post. However, I was positing that perhaps the value that could be read is always up to date with the total errors from the latest 75 sectors in a rolling sum manner, rather than the sample and hold manner you have described. But I suspect that what you describe is actually more likely.

Right now I envision four possibilities for what the program reads from the drive every time it samples:

1. number of errors since last sample
2. number of errors since test started
3. number of errors in the last 75 sectors immediately preceding the instant at which the sample was taken
4. number of errors in the last full "chunk" of 75 sectors, where these "chunks" are aligned on integral 75-sector boundaries

The first three are what I described in my last post, the last is what you described. Karr, are any of these four correct?
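For concreteness, the fourth possibility could be modeled like this toy sketch (an illustration of the idea, not a claim about the actual firmware):

Code: Select all
CHUNK = 75   # counts held per aligned 75-sector chunk

def read_counter(sample_lba, errors_per_chunk):
    # the drive returns the count for the last *completed* aligned chunk
    last_full_chunk = sample_lba // CHUNK - 1
    return errors_per_chunk.get(last_full_chunk, 0)

errors_per_chunk = {0: 20, 1: 3, 2: 15}       # made-up chunk counts
print(read_counter(190, errors_per_chunk))    # -> 3  (chunk covering 75-149)
print(read_counter(250, errors_per_chunk))    # -> 15 (chunk covering 150-224)
# chunk 0 (sectors 0-74) is never read -> its 20 errors are simply lost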

I am currently leaning towards the fourth possibility as the most likely, in which case I was wrong to claim that the error rates that KProbe reports are incorrectly scaled. Even if KProbe can't sample the drive at 75-sector intervals exactly, these instantaneous rates would still be correct, but what would be incorrect, or at least easily misinterpreted, would be the counts of total errors. The totals wouldn't represent the total number of errors on the disc, but rather the total number of errors in the samples that were taken (which don't cover every sector of the disc).

The averages would have to be viewed in the same light - they would represent the average of the samples that were collected, but not the average over the entire disc because inevitably at least a few portions of the disc are missed by the samples. Still, this would be nothing to be excited about, because the difference in any practical case would be very small. Even though the samples don't provide complete coverage of every sector of the disc, they do provide good enough coverage to calculate a representative average. The difference wouldn't be anything like the 17% discrepancy I calculated earlier.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby cfitz on Fri Apr 18, 2003 12:28 pm

Hey Kennyshin, dodecahedron and Medium Rare, thanks for the kind words. :D I was so busy calculating error rates and averages and such that I didn't even notice I had passed the 3000 mark.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby cfitz on Fri Apr 18, 2003 1:29 pm

Oops. I just got the old "Axis Maximum Value must be >= Minimum" error on version 1.1.8. I wonder if the same bug as before slipped back in, or if this is a different manifestation?

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby rdgrimes on Fri Apr 18, 2003 1:34 pm

the old "Axis Maximum Value must be >= Minimum" error

Ditto, got it a couple times because when you zoom in on the graph, it removes the auto-axis function. Also got it a couple other times, maybe when changing the colors.
rdgrimes
CD-RW Player
 
Posts: 963
Joined: Fri Dec 20, 2002 10:27 pm
Location: New Mexico, USA

Postby MediumRare on Fri Apr 18, 2003 2:18 pm

cfitz wrote:When I said that the samples were off by an average of 16.6%, I wasn't trying to suggest that all the samples were off by a single factor.
I figured that, but sometimes you have to state the "obvious".
cfitz wrote:Yes, I neglected to mention that here, but I did state in the CDFreaks forum that I think CD Doctor is doing the same thing. :oops: Thanks for the confirmation.
...
Yes indeed. Again I mentioned this over at CDFreaks but neglected to do so here. :oops: You will have to forgive me - I was up late.

I saw that after I posted here. :D

cfitz wrote:
Medium Rare wrote:It would be better to use the LBA-value rescaled to time, not the sample no., for the time axis- in Excel this is the difference between a "line diagram" and a "point x-y" chart.

Agreed, and that is an excellent analogy. By the way, I didn't notice this earlier since I didn't have time to examine the 1.1.6 and 1.1.7 releases, but have you noticed that the charts displayed in KProbe have changed from a line chart to a bar chart format?
...
There is nothing wrong with that; in fact I think it was the right choice, but I just happened to notice it now.

Yeah, I noticed that right away. I prefer it too, really. One problem with bar charts (if we stick with the Excel analogy) is that they are normally implemented only for x-axes with fixed Deltas or named categories (I have a problem here- my Excel is in German and I don't know what the English version says for e.g. "Rubrik").

You've presented the various scaling options succinctly. I really appreciate the effort you put into well-formulated and well thought-out contributions. :D

G
User avatar
MediumRare
CD-RW Translator
 
Posts: 1768
Joined: Sun Jan 19, 2003 3:08 pm
Location: ffm

Postby spath on Fri Apr 18, 2003 3:03 pm

Without presuming how these chipsets work, the most common method I've seen is the first one, just because it's the simplest and cheapest to implement in hardware (and it avoids the sampling problems you nicely described).
spath
Buffer Underrun
 
Posts: 33
Joined: Thu Dec 26, 2002 8:15 am

Postby cfitz on Fri Apr 18, 2003 8:07 pm

Thanks for the input spath. The first method I described is the first one I thought of, but after reading Karr's comments it seems to me that he might be trying to describe something more like method three or four. I guess we will have to wait for Karr to confirm or deny, unless you or someone else can independently confirm how the MediaTek chipsets used in LiteOn drives handle this function.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby cfitz on Fri Apr 18, 2003 8:17 pm

MediumRare wrote:One problem with bar charts (if we stick with the Excel analogy) is that they are normally implemented only for x-axes with fixed Deltas or named categories (I have a problem here- my Excel is in German and I don't know what the English version says for e.g. "Rubrik").

Yes, this is true. That's one reason I suggested that post-collection resampling to uniform intervals might be needed. It is certainly possible to draw a bar chart with non-uniform categories, but as you say isn't common. By the way, "category" is the right term for "rubrik". I presume the German "rubrik" and the English "rubric" share the same origin.

MediumRare wrote:You've presented the various scaling options succinctly. I really appreciate the effort you put into well-formulated and well thought-out contributions. :D

On the contrary, thank you for your well-formulated and well thought-out observations and contributions.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby cfitz on Sun Apr 20, 2003 10:13 am

Has anyone played with the raw commands feature? It's a neat feature, allowing you to see the raw bytes that the drive returns from the disc. Of course, to really make good use of it you need to know the format of the returned data. I imagine with some investigation we could figure out some of the fields either by finding specs or through comparison of the data to known characteristics of the disc.

For example, I did an easy one and found that in the results returned by the "Read ATIP" command, bytes 08 through 0A contain the ATIP disc identifier code in m:s:f format, while bytes 0C through 0E contain the disc capacity, also in m:s:f format.
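Decoding those fields from the returned buffer is then a one-liner each. A sketch, using only the offsets I found above (buf is the raw response as a byte string):

Code: Select all
def parse_atip(buf):
    # offsets as found above: 08-0A disc identifier, 0C-0E capacity (m:s:f)
    disc_id  = "%02d:%02d:%02d" % (buf[0x08], buf[0x09], buf[0x0A])
    capacity = "%02d:%02d:%02d" % (buf[0x0C], buf[0x0D], buf[0x0E])
    return disc_id, capacity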

I think the basic format of the data returned by the "Read TOC" command shouldn't be too hard to figure out either.

There are a lot of nice things in KProbe to explore.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am

Postby spath on Mon Apr 21, 2003 6:38 pm

The format of the MMC commands like Read TOC, Read ATIP and so on is fully described in the MMC specs, available for free at www.t10.org. You can also monitor all ATAPI/MMC commands and the bytes returned by any drive with tools like bustrace (www.bustrace.com) or bushound (www.bushound.com).

Does this 'raw commands' tool also provide the drive-specific commands, like the one used to read the Cx and Px counters? If so, it could be interesting to document those. Also, if you can issue these commands with this 'raw commands' tool, then we can figure out the accumulation method :)
spath
Buffer Underrun
 
Posts: 33
Joined: Thu Dec 26, 2002 8:15 am

Postby cfitz on Mon Apr 21, 2003 7:05 pm

Thanks for the link, spath. Looks like there is a lot of good information there.

cfitz
cfitz
CD-RW Curmudgeon
 
Posts: 4572
Joined: Sat Jul 27, 2002 10:44 am
