Status of Cheap DIY Laser Power Meters?

I took the average of the ratios which seems to work much better, also fixed up the function a bit:
Code:
var floatingRatio:Number = 0;

function seriesSum4(a:Number, b:Number, c:Number, d:Number):Number {
   // Find Differences
   var da:Number = b - a;
   var db:Number = c - b;
   var dc:Number = d - c;

   // Find Ratio (average of the two consecutive difference ratios)
   var ratio:Number = (db / da + dc / db) / 2.0;
   if (isNaN(ratio))
      ratio = 0;

   // Only fold the ratio into the running average if the series converges (|r| < 1)
   if (Math.abs(ratio) < 1) {
      floatingRatio = (floatingRatio + ratio) / 2.0;
   }
   trace (floatingRatio + " - " + ratio);
   // Geometric series limit: a + da / (1 - r)
   return a + da / (1.0 - floatingRatio);
}
(In ActionScript now...)  You have to watch for divisions by zero when calculating the ratio, e.g. when the meter has just been sitting and everything is zeros.

[attachment: capture1n.png]


If you slow down the samples a bit it can deal with a fair amount of noise; in the worst case it basically acts as an averaging filter, just without as much delay.
[attachment: capture3.png]

[attachment: capture2g.png]


The algorithm seems to be most sensitive to certain kinds of noise, mainly when all the values are going up except for one.  From my experience with a multimeter, that doesn't happen very often on your sensor.

You can play with it here, press simulate to start the graph and hold down the laser button to raise the temperature.
http://www.sharemation.com/691175002/laser/Untitled-1.html
 





Interesting stuff.. I've given it an initial try in the uC, but without any success.. the thing doesn't like working with non-integer values at all - a couple of floating point calculations and I hit the code size limit (which is 2k due to the demo software I'm using - fine for the 16F628, but the 877 does have 8k).

Perhaps it'll work better using an LM335 as the sensor, but that only comes in a TO-92 package (or DIP8), which is impractically large for this application.
 
Good hell, you shouldn't be using floating point datatypes on embedded platforms. Floating-point code is large and CPU-intensive and you don't need that kind of precision in this kind of application (SIXTEEN SIGNIFICANT DIGITS?!). You can usually get away with just using large integer datatypes and doing some scaling on the output to reflect the correct range.

Here, I've derived an exact formula (image attached) for calculating the final value of the voltage from two measured points. If you don't want to read through it all, the gist of it is:

V[sub]final[/sub] = V(t[sub]1[/sub]) + K*DeltaV

Where DeltaV = V(t[sub]2[/sub]) - V(t[sub]1[/sub]). K is a measured constant chosen for scaling (use a known source and choose K from that). Just make sure that all measurements are made at the exact same time intervals.
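As a rough sketch of what that looks like in integer C (the function and the example K value here are just placeholders, not the actual meter code; derive the real K from calibration):

Code:
/* Sketch: Vfinal = V(t1) + K*DeltaV in integer math.
   K is stored as a fraction K_NUM/K_DEN; both values are placeholders. */
#include <stdint.h>

#define K_NUM  13
#define K_DEN  4

uint16_t predict_final(uint16_t v1, uint16_t v2)
{
    int32_t deltaV = (int32_t)v2 - (int32_t)v1;          /* DeltaV = V(t2) - V(t1) */
    return (uint16_t)(v1 + (deltaV * K_NUM) / K_DEN);    /* Vfinal = V(t1) + K*DeltaV */
}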

For noisy data, just average multiple values. A good simple three-point average:
Code:
average = (V[0] + V[1] + V[1] + V[2]) >> 2;

Where V[1] represents the center value, and V[0] and V[2] are left and right values respectively. Use bit-shifting to reduce computational complexity.
 

Attachments

  • capacitor_voltage.png
Most of the work can be done using integer calculations as it is, as the signals are all 16 bit integers (64 10 bit samples added up, which provides averaging in the process and still fits in int).
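(Just to illustrate the summing step - a minimal sketch, where read_adc() is a made-up name standing in for the actual ADC read routine:)

Code:
/* Sketch: sum 64 ten-bit samples into one 16-bit value. */
#include <stdint.h>

uint16_t read_adc(void);   /* assumed to return the raw 10-bit reading, 0..1023 */

uint16_t sample_summed(void)
{
    uint16_t sum = 0;
    uint8_t i;
    for (i = 0; i < 64; i++)
        sum += read_adc();   /* 64 * 1023 = 65472, still fits in 16 bits */
    return sum;
}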

However, from the code provided (a,b,c and d are 16 bit integers):

1) var da:Number = b - a;
2) var db:Number = c - b;
3) var dc:Number = d - c;
4) ratio:Number = (db / da + dc / db) / 2.0;
5) return a + da / (1.0 - floatingRatio);

How should I proceed to do this calculation with integers? Calculations 1, 2 and 3 are perfectly okay as integer results, but what about 4 and 5?
I suppose I could scale up da, db and dc by a couple of bits since they are usually small compared to the 65535 range of an int.

And from the capacitor charging sheet:

19) r = 1 / e[sup]dT/tau[/sup]
20) final = V1 + (V2 - V1) / (1 - r)

requires knowing the time constant of the process. With a known capacitor and resistor that is not a problem, but do you think I should somehow measure this for the laser/sensor system and proceed from there? I'm not convinced it's even constant over the whole range...
 
My math is very similar to Bionic-Badger's, with the exception that I spend every line except the last calculating r (the ratio).  He does bring up a valid point that you may get acceptable results just using a constant for r and playing around.  I'll upload a version of my flash program that lets you set r to see how it will behave, that will cut out most of the math.  I have a sneaking suspicion that r will change with the laser power though.

r represents how much greater each increase is than the previous one; for example an r of 0.5 means that each term will increase by 0.5 * the increase between the previous terms (2, 3, 3.5, 3.75). An r of 0.25 would give (2, 3, 3.25, 3.3125). An r of 1 would just increase forever. The predicted final value is a + da / (1 - r); for the r = 0.5 example that works out to 2 + 1 / (1 - 0.5) = 4.


You could skip all the floating point math and instead use 16 bit integers (multiply the differences by 1024 or something before performing division so the result is still an integer).
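Something along these lines, maybe - a rough sketch of the same prediction done with a 1024x fixed-point scale (the variable names and guard cases are mine, not tested on the PIC):

Code:
/* Rough fixed-point sketch of seriesSum4(): the ratio is carried as r * 1024
   so everything stays in integer math.  Guard values are illustrative only. */
#include <stdint.h>

#define SCALE 1024L

int32_t predict_fixed(uint16_t a, uint16_t b, uint16_t c, uint16_t d)
{
    int32_t da = (int32_t)b - a;
    int32_t db = (int32_t)c - b;
    int32_t dc = (int32_t)d - c;

    if (da == 0 || db == 0)                  /* flat data: nothing to extrapolate yet */
        return a;

    /* ratio carried as r * SCALE, averaged over the two difference ratios */
    int32_t ratio = (db * SCALE / da + dc * SCALE / db) / 2;

    if (ratio >= SCALE || ratio <= -SCALE)   /* |r| >= 1 would never converge */
        return d;

    /* a + da / (1 - r)  ->  a + da * SCALE / (SCALE - ratio) */
    return a + (da * SCALE) / (SCALE - ratio);
}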
 
Well the curve graph you've shown is a capacitor charging curve.  You actually don't need to know what the time constant is, but can instead choose r such that it fits within your dynamic range.  The value 1 / e[sup]DT/tau[/sup] is what you'd use if you needed the actual voltage, and had access to floating point numbers and real measured values.  You really don't, and you don't need them.  For you, r, or rather k = 1/(1-r), is just a calibration constant.

So how do you use it?

Assume you know the maximum value you'd like to represent in your meter.  Say it is 5 volts, and that corresponds to a maximum ADC output of 1023.  First provide a "known" signal to your diode/thermal sensor that will produce 5V on the ADC, and make sure that your ADC is producing 1023 as its output (avoid clipping, etc.).

Then let the system return to steady state (where it is ~0V).  Then do a calibration step.  Choose a set deltaT sample period.  Fire up your known 5V signal source, and measure two or more samples at those time increments before the signal gets too close to 5V.

Now apply the formula, only this time you know what V0 should be (5V), and you have two measured values (V1 and V2).  From that you can derive the k = 1 / (1 - r) value.  The k is your calibration value provided you continue to use the same deltaT in the future.  The nice thing about the calculation is that you can use your measured V1 each time, so even if there is some error in calibration it slowly approaches the correct value anyway.
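In code the calibration might look something like this (a sketch only - k is kept as a 1024x fixed-point fraction, and the function names are made up):

Code:
/* Sketch of the calibration described above: with the final value known
   (v0_known, the full-scale signal), solve V0 = V1 + k*(V2 - V1) for k.
   k is stored as k * 1024 so later predictions stay in integer math. */
#include <stdint.h>

#define SCALE 1024L

static int32_t k_fixed = SCALE;   /* k * SCALE, overwritten by calibrate() */

void calibrate(uint16_t v0_known, uint16_t v1, uint16_t v2)
{
    int32_t delta = (int32_t)v2 - (int32_t)v1;
    if (delta != 0)
        k_fixed = ((int32_t)v0_known - v1) * SCALE / delta;
}

uint16_t predict(uint16_t v1, uint16_t v2)
{
    int32_t delta = (int32_t)v2 - (int32_t)v1;
    return (uint16_t)(v1 + (delta * k_fixed) / SCALE);
}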

There are some tricks to making this more computationally efficient too.  You can choose k as a power-of-two divisor (1 / 2[sup]n[/sup]), e.g. 1/2, 1/4, 1/8, etc., which means you can just bit-shift the value instead of doing a division, which can take time and throw off your sampling period (if it's short).  So for example you could use:

Code:
V0 = V1 + ((V2 - V1) >> 2);   /* parentheses needed: >> binds looser than + in C */

The >> is the bit-shifting operator.  The value on the right is how many bits to shift.  A ">> n" bit-shift has the effect of dividing by 2[sup]n[/sup].  The above case would have k = 1 / 2[sup]2[/sup]. It's very computationally efficient compared to real division.

For the output, you will want to scale the value to the final answer.  This can be done in integers as well.  First find out what the final output scaling will be.  Say you have a dynamic range of 1024, and 1023 corresponds to a measured 324 mW.  You'd like to have the meter read "324" when the value is at 1023.  How is that done?  Well multiply it by a rational:

Code:
long numerator = 324;
long denominator = 1024;
char output[100];
/* the cast to long keeps V0 * 324 from overflowing a 16-bit int on the PIC */
sprintf(output, "The mW is: %ld", ((long)V0 * numerator) / denominator);

This preserves precision across the operation.  There, now the value is scaled without resorting to floats.

If you still want to use floats:

Code:
char output[100];
/* 324.0 / 1024.0 keeps the math in floating point; 324 / 1024 would truncate to 0 */
sprintf(output, "The mW is: %.2f", V0 * (324.0 / 1024.0));
 
Oh, and if r does change with wavelength as 691175002 mentioned, you can just derive k for specific wavelengths and then switch it provided you know the wavelength. An automated way to figure out the wavelength (to some degree) is using a color sensor.
 
:-/ I doubt that sensor will work. It probably has only three elements (RGB) and thus will only react to three peak wavelengths.
 
Well there are only a few wavelengths that really need to be detected. It probably won't tell the difference between 635nm and 660nm, but there probably isn't much difference for the sensor anyway.
 
Yeah. The only really important wavelengths are 405, 532, and 650nm anyway. Although LDs differ slightly in wavelength from one another.
 
I have determined that r is dependent on the response of the sensor (i.e. how much of the light it absorbs and how fast it heats up). It will likely stay constant as long as the ambient temperature doesn't change wildly. The only potential problem is that coatings do not absorb all wavelengths equally; however, I am guessing that it won't make a difference of more than a few % and the error will decrease as the readings get closer to the real value.
 
@Bionic-Badger: it's interesting stuff really - I should let it all sink in a bit, I'm fairly new to these things, but it sounds like it will be doable with some smart programming.

As for the sensor: it is thermal, which is what makes it slow in the first place. If it had been some optical sensor, none of this prediction work would be of any use. The time for the display to reach 99% of the final result is on the order of a minute or so, but my goal is to predict that endpoint after a (few) second(s). ADC speed is no problem; I'm taking 5 (actually 64x5) samples a second or so now, but that can be much faster if need be.

For now I'll just assume the sensor coating to be black, so no compensation for wavelength is needed. This might be off a few percent, but that is not really a problem at this stage.
 
It might be interesting to see how well the predicted and real values correspond to each other.
 
Upon further consideration, likely the best option is to precalculate the ratio during the calibration step and keep it constant for all wavelengths.

After that, perform massive averaging over the predicted values. Noise when the sensor is just starting to warm up is absolutely brutal, because the tiniest differences get magnified 50-100 fold depending on the final power of the laser. To increase the signal to noise ratio I suggest slowing down the sampling: 1mV of noise is huge when each sample is only 2mV apart, but much less significant when each sample is increasing by 10-15mV.

I would place my bets on taking a single sample a second, using a precalculated r and averaging.

You could speed up the results by discarding the current average if the temperature has increased from 0 to >3 in a single sample or something to prevent a bunch of zeros from holding back the reading.
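Something like this, perhaps (just a sketch - the threshold, the 50/50 smoothing and the names are arbitrary choices here):

Code:
/* Illustrative only: keep a running average of the predicted values and
   throw it away when the reading jumps up from ~0, so a pile of stale
   zeros doesn't hold back the result.  RESET_JUMP is an arbitrary threshold. */
#include <stdint.h>

#define RESET_JUMP 3

static int32_t avg = 0;
static uint16_t prev_raw = 0;

int32_t update_reading(uint16_t raw, int32_t predicted)
{
    if (prev_raw <= RESET_JUMP && raw > RESET_JUMP)
        avg = predicted;               /* sensor just started heating: restart */
    else
        avg = (avg + predicted) / 2;   /* simple running average of predictions */

    prev_raw = raw;
    return avg;
}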
 
Right.. taking the samples further apart should aid in reducing noise problems - probably just a matter of tinkering with the sample rate.

The noise seems to mostly affect the ratio, which gives the wild results - pegging the ratio to a constant deals with that effectively. Measuring a value for it during a calibration run could be a good solution.
 

