Power Losses in Thermal Measurements of Lasers (Theoretical)

Joined: Oct 2, 2008 · Messages: 266 · Points: 0
Hey guys,

After reading through a wealth of DIY projects and LPM issues, I've been considering making my own. That sent me through a lot of calculations to see what would make an accurate but low-cost LPM. My idea was to use a flat plate as a sensor and measure its temperature at steady state while a laser shines on it. But I stumbled heavily in one area, and I think it may help explain the trend of under-measurements many people have reported over the last year or so.

Technically speaking, as the temperature of a plate rises due to power being applied (for us, shining a laser onto it), it starts to lose power to the surroundings through convection and radiation (air is a poor conductor). This loss grows as the temperature increases. Eventually the power lost equals the power input, and the temperature of the plate stabilises.

Using an online calculator, I tried to find the heat lost by the sensor to its surroundings so I could compensate for it in measurements, and I discovered that the heat lost is non-linear. For example, if the sensor is 100°C above the surrounding temperature it loses 1.27 W of heat, but at 150°C above the surroundings it loses something like 2.44 W (at 1°C above the surroundings the loss is 8.2 mW).
These figures assume a 3 cm x 3 cm surface and are the heat losses from one side due to convection and radiation. Put another way: if you shone a 1.27 W laser at the plate (assuming all the power is absorbed), its temperature would settle at 100°C above ambient; with a 2.44 W laser it would settle at 150°C above ambient. So doubling the power will not double the temperature reading.
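To make the shape of that non-linearity concrete, here's a minimal Python sketch of the same kind of calculation. It assumes a simplified natural-convection correlation and an emissivity of 0.95 (my guesses, not whatever the online calculator used), so the numbers land in the same ballpark as the figures above rather than matching them exactly:

```python
# Minimal sketch of the plate heat-loss calculation, assuming natural
# convection from a small plate (h = 1.32 * (dT/L)**0.25) plus grey-body
# radiation with emissivity eps. The correlation and constants are my
# assumptions, so results land near the figures above without matching
# the online calculator exactly.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def plate_loss(dT, side=0.03, T_amb=298.0, eps=0.95):
    """One-sided heat loss (W) of a square plate dT kelvin above ambient."""
    A = side * side                      # plate area, m^2
    h = 1.32 * (dT / side) ** 0.25       # natural-convection coefficient
    conv = h * A * dT                    # convective loss
    rad = eps * SIGMA * A * ((T_amb + dT) ** 4 - T_amb ** 4)  # radiative loss
    return conv + rad

for dT in (1, 100, 150):
    print(f"dT = {dT:3d} K  ->  one-sided loss ~ {plate_loss(dT) * 1e3:.0f} mW")
# Loss at dT=150 comes out roughly 1.8x the loss at dT=100, not the 1.5x a
# linear model would give: doubling the power does not double the rise.
```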

Now, considering that 445 nm diodes, and with them higher-power lasers (>1 W), appeared at around the same time the inaccuracy reports started, I think there is a correlation between their arrival and the apparent inaccuracies of the LPMs.

I suspect that because such high laser powers hit such a small sensor, the temperature of the target sky-rockets. As my figures above show, temperatures of >100°C above ambient are needed for the sensor to reach thermal equilibrium with the surrounding air at >1 W. Since real sensors are smaller than my hypothesised 3 cm x 3 cm surface, they cannot dissipate as much heat (heat dissipation scales with surface area), and I believe the temperature could easily exceed 200°C for 2 W lasers.

TECs (or any thermopile, for that matter) produce a voltage proportional to the temperature difference, i.e. a linear response to temperature. But since the power-to-temperature relationship is non-linear, the power-to-voltage output is non-linear as well.

Since an LPM has to measure anything from 1 mW to 2 W+, there is a large range of temperatures to deal with, and because heat dissipation to the surroundings increases non-linearly with temperature, the inaccuracy grows the hotter the sensor gets.

I know curve adjustments are made to account for this issue, and to make those curve adjustments you need calibration lasers. It is easier to make a low-power calibration laser than a high-power one.

So there are two possibilities that I am suspecting:

1. Either the curve adjustments and calibration of LPMs are assumed to be linear (failing to account for the increased dissipation of heat to the environment), which would cause inaccuracies at high powers (this would apply exclusively to DIY LPMs),

Or

2. They are calibrated non-linearly (stepwise, I've heard, with an approximation for each region) but were never actually calibrated at really high powers (>2 W or so), resulting in either:
A) a linear approximation from a certain power upward, which runs into the same problem as above, or
B) a polynomial approximation (e.g. a Taylor-style polynomial fit), which is inaccurate when extrapolating, and the better the fit within the verified data points, the worse the extrapolation tends to be (a common problem with polynomial approximation; a quick demonstration follows below).
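For what it's worth, point B is easy to demonstrate numerically. In this sketch the "true" sensor response is invented purely for illustration; the point is only that a polynomial that fits the calibrated range more tightly tends to blow up faster once you extrapolate past it:

```python
# Numerical illustration of (B): fit calibration points only up to 1 W,
# then extrapolate to 2 W. The "true" sensor curve here is invented for
# the demo (a compressive power law, loosely mimicking the saturating
# temperature rise discussed above).
import numpy as np

true_response = lambda p: p ** 0.85        # hypothetical sensor curve
p_cal = np.linspace(0.05, 1.0, 8)          # calibration powers <= 1 W
v_cal = true_response(p_cal)

lo = np.polyfit(p_cal, v_cal, 2)           # modest quadratic fit
hi = np.polyfit(p_cal, v_cal, 7)           # near-perfect fit inside the data

for p in (0.5, 1.5, 2.0):
    err_lo = np.polyval(lo, p) - true_response(p)
    err_hi = np.polyval(hi, p) - true_response(p)
    print(f"P = {p:.1f} W: deg-2 error {err_lo:+.4f}, deg-7 error {err_hi:+.4f}")
# Inside the calibrated range the degree-7 fit wins easily; past 1 W it
# diverges far faster -- exactly the extrapolation hazard described above.
```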

Considering that you have to put in a more-than-proportional amount of power to raise the temperature further the hotter the plate already is, the likely result is under-measurement at high powers.

Also, I don't know if the LaserBees from a few years ago were calibrated up to 2 W using 2 W lasers (although now they definitely should be), so for people who have old LaserBees, there's a possibility that the higher-power regions are more of an extrapolation? (This is a stab in the dark. Jerry, feel free to clarify.)

I have talked with ARGLaser regarding this, and he suggested posting it up to get more opinions.

What do you guys think? Could this explain why many LPMs are seeming to be under-reading at high powers? (non-professional ones that use TEC sensors that is)
 





Trevor
Joined: Jul 17, 2009 · Messages: 4,386 · Points: 113
Also, I don't know if the LaserBees from a few years ago were calibrated up to 2 W using 2 W lasers (although now they definitely should be), so for people who have old LaserBees, there's a possibility that the higher-power regions are more of an extrapolation? (This is a stab in the dark. Jerry, feel free to clarify.)

I can shed some light on this issue, but I'm a bit too tired to give your entire post a meaningful response right now.

LaserBee I and II units allow calibration somewhere between 400 mW and 600 mW (I forget the exact limits in the stock firmware). The LaserBee I that I used to develop Ellipsis was calibrated at 500 mW. I suspect the LaserBee II I recently purchased is as well, but I have yet to check.

The LaserBee 2.5W USB I have also shows that it was calibrated at 500mW.

So, the LaserBee I was calibrated at 50% of its maximum input power, the LaserBee 2.5W USB was calibrated at 20% of its maximum input power, and the LaserBee II was likely calibrated at 12-18% of its maximum power input.

With regards to response curve adjustment, I've only done an in-depth analysis of the LaserBee I. It uses four separate adjustments to the nonlinear voltage output of the TEC so that a linear output is achieved. This is why knees are present in the response, as noted here:

http://laserpointerforums.com/f70/e...ild-notes-media-more-75723-2.html#post1129284
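Roughly speaking, a four-segment correction of the kind described might look like the following sketch. The breakpoints, gains and offsets are invented for illustration and are not LaserBee's actual firmware values:

```python
# Sketch of a four-segment piecewise-linear correction of the kind
# described. The breakpoints, gains and offsets are invented for
# illustration -- they are NOT LaserBee's actual firmware values.
def correct(v_tec):
    """Map raw TEC voltage (V) to indicated power (mW), four linear pieces."""
    # (segment upper bound in volts, gain in mW/V, offset in mW);
    # offsets chosen so the pieces join continuously.
    segments = [
        (0.10,           950.0,    0.0),
        (0.25,          1050.0,  -10.0),
        (0.60,          1200.0,  -47.5),
        (float("inf"),  1400.0, -167.5),
    ]
    for v_max, gain, offset in segments:
        if v_tec <= v_max:
            return gain * v_tec + offset

for v in (0.05, 0.20, 0.50, 1.00):
    print(f"{v:.2f} V -> {correct(v):6.1f} mW")
# The output is continuous, but its slope jumps at each breakpoint, so the
# residual error against the smooth true curve bends there -- the "knees".
```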

Hope that helps!

Trevor
 

ARG
Joined: Feb 27, 2011 · Messages: 6,772 · Points: 113
The curve adjustments of LPMs aren't assumed to be linear by most. In low-power LPMs (Radiant Alpha 2W, LaserBee A 2W) it's not much of a problem, but the deviation grows exponentially, so it causes a problem at higher powers. That's why you don't see many analog LPMs over 2 W: the meter has to be digital so that the curve adjustment can be done.

I curve-adjust my meters using a 3000-point array, so there are no inaccurate steps when graphing lasers. I've tried using polynomials, but with limited success: with my LPMs that go to 4 W, multiple fifth- and fourth-order adjustments would have to be made. I am working on a full tutorial on how to do a proper curve adjustment as part of the ARGMeter project. (A sketch of the lookup-table approach is below.)
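A dense lookup table like that can be implemented with simple linear interpolation. The calibration data in this sketch is synthetic, purely for illustration; in practice each entry would come from sweeping a known reference power across the sensor:

```python
# Sketch of a dense lookup-table correction like the 3000-point array
# mentioned above. The table here is synthetic; in practice each entry
# would come from sweeping a known reference power across the sensor.
import numpy as np

v_table = np.linspace(0.0, 3.0, 3000)    # raw TEC voltage, V (hypothetical)
p_table = 1500.0 * v_table ** 1.12       # invented "true" power mapping, mW

def reading_mW(v_raw):
    """Linearly interpolate a raw voltage against the calibration table."""
    return float(np.interp(v_raw, v_table, p_table))

print(f"{reading_mW(1.234):.1f} mW")
# With 3000 closely spaced points, linear interpolation leaves no visible
# steps in a graph, unlike a 3- or 4-segment correction.
```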

What Jerry/LaserBee does for his meters is a 3-step curve adjustment. This method is rather sloppy, as it leaves large steps that make a laser look unstable when graphing it:
[Image: BAoek.png — graph of a reading showing the steps left by a 3-step curve adjustment]

What is also likely assumed by LaserBee is that the curve adjustment of each TEC is the same. But the calibration of each TEC varies, and the curve adjustment does too; not individually curve-adjusting TEC LPMs is just poor calibration.

I've tested lots of TECs in my search for the perfect one. Some are closer to linear than others, and some are awfully curved, but I have never encountered two TECs of the same model with the same curve adjustment.
 
Joined: Oct 2, 2008 · Messages: 266 · Points: 0
Hmm, so what I gather from your responses is that we're actually hitting the limit of "easy laser power measurement": we're going into higher powers now, which require more complicated adjustments to account for the non-linear response of power to temperature. Kind of good news, I guess; time for a technology leap!

OK, so it does sound like my hypothesis has a good chance of being correct. I have a proposal (which I hope to carry out myself this summer). Since there's an issue of linearity with measurements at high powers, we could change the method of measurement entirely:

1. Get a metal piece/block/sheet of small size (just large enough for a defocused laser dot).
2. Heat it up using 1 W or 2 W resistors driven from a constant voltage/current source (this applies a known power to it).
3. Shine the laser to be measured onto the metal and, using a feedback sensor (e.g. a thermistor or other temperature sensor), lower the resistor power until the metal reaches the same temperature as before.
4. The laser's power is then the difference between the initial resistor power and the reduced resistor power. (A rough control-loop sketch follows below.)
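Here's a rough sketch of that loop in Python. read_temperature() and set_heater_power() are hypothetical placeholders for whatever sensor and supply are actually used, and the proportional gain and timing are untested guesses, not a proven design:

```python
# Rough sketch of the substitution loop, assuming a simple proportional
# controller. read_temperature() and set_heater_power() are hypothetical
# placeholders for whatever sensor and supply are actually used; the gain
# and timing are untested guesses.
import time

def measure_laser_power(read_temperature, set_heater_power,
                        p_initial_W=2.0, gain=0.02, settle_s=5.0, steps=200):
    # 1) Establish the reference temperature with heater power alone.
    set_heater_power(p_initial_W)
    time.sleep(settle_s * 10)                 # wait for steady state
    t_ref = read_temperature()

    # 2) With the laser now on the target, back the heater off until the
    #    plate returns to the reference temperature.
    p_heater = p_initial_W
    for _ in range(steps):
        time.sleep(settle_s)
        error = read_temperature() - t_ref    # positive when too hot
        p_heater = max(0.0, p_heater - gain * error)
        set_heater_power(p_heater)

    # 3) The laser supplies whatever the heater no longer has to.
    return p_initial_W - p_heater
```

The attraction of this null-balance approach is that the plate sits at the same temperature in both states, so the non-linear loss to the surroundings is identical both times and cancels out of the subtraction.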

Good idea?

What will make this accurate is how good your DMM is at measuring current/voltage and resistance (to know the power applied by resistor heating), and the surface area of the metal (a smaller surface area means a larger temperature change and hence less error in the measurement). As for response time: the lighter the metal piece, the faster it responds.
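A quick back-of-envelope check of that last point (plate size, material, and loss coefficient below are my own assumptions):

```python
# Lumped-plate time constant: tau ~ m * c_p / (h_eff * A). Plate size,
# material and the effective loss coefficient are my own assumptions.
side, thick = 0.03, 0.001      # 3 cm x 3 cm x 1 mm plate, metres
rho, c_p = 8960.0, 385.0       # copper: density kg/m^3, specific heat J/(kg K)
h_eff = 15.0                   # combined convection + radiation, W/(m^2 K)

m = rho * side * side * thick  # plate mass, kg (~8 g)
A = side * side                # one-sided area, m^2
tau = m * c_p / (h_eff * A)    # time to ~63% of a step change, seconds
print(f"mass = {m * 1e3:.1f} g, tau ~ {tau:.0f} s")
# Halving the thickness halves the mass and hence tau: the lighter the
# piece, the faster the response, as noted above.
```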
 

ARG
Joined: Feb 27, 2011 · Messages: 6,772 · Points: 113
That sounds like an interesting idea. I would definitely like to see the results from that test.
A piece of ceramic from a TEC would be best since that's what is being used.

I would love to see more developments in curve adjustment.
 



