Tw15t3r
Hey guys,
Reading through a wealth of DIY projects and threads about issues with LPMs, I was considering making my own. That sent me through a lot of calculations to see what would make an accurate but low-cost LPM. My idea was to use a flat plate as the sensor and take temperature measurements at steady state while a laser is shone on it. But I stumbled heavily in one area, and I think it may help explain the trend of under-measurements reported by many people over the last year or so.
Technically speaking, as the temperature of a plate rises due to power being applied (for us, shining the laser onto it), it starts to lose power to the surroundings through convection and radiation (air is a poor conductor). This power loss grows as the temperature increases. Eventually the power lost equals the power input, and the temperature of the plate stabilises.
Using an online calculator, I was trying to work out the heat lost by the sensor to its surroundings so I could compensate for it in measurements, and I found that the heat loss is non-linear. For example, if the sensor is 100C above the surrounding temperature it loses about 1.27W of heat, but at 150C above the surroundings it loses something like 2.44W (the same calculation for 1C above the surroundings gives 8.2mW).
These figures assume a 3cm x 3cm surface and are the heat losses from one side due to convection and radiation. Another way to put it: if you shone a 1.27W laser at the plate (assuming all the power is absorbed), its temperature would settle at 100C above the surroundings, while a 2.44W laser would take it to 150C above. From those results you can see that doubling the power does not double the temperature reading.
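To make this concrete, here's a rough Python sketch of the kind of calculation I mean. The convection coefficient, emissivity and ambient temperature are assumed values (I don't know what the online calculator used), so the absolute numbers won't match my 1.27W / 2.44W figures exactly, but the non-linear trend is the point:

```python
# One-sided heat loss from a small flat plate: natural convection plus
# radiation. h (convection coefficient) and eps (emissivity) are assumed
# values, not measured ones.

SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W/m^2/K^4
A     = 0.03 * 0.03   # 3cm x 3cm plate, one side, in m^2
H     = 10.0          # assumed natural-convection coefficient, W/m^2/K
EPS   = 0.9           # assumed surface emissivity
T_AMB = 298.0         # ambient temperature, K (25C)

def heat_loss(delta_t, area=A):
    """One-sided heat loss (W) at delta_t kelvin above ambient."""
    t_plate = T_AMB + delta_t
    q_conv = H * area * delta_t                             # linear in delta_t
    q_rad  = EPS * SIGMA * area * (t_plate**4 - T_AMB**4)   # grows like T^4
    return q_conv + q_rad

for dt in (1, 50, 100, 150, 200):
    print(f"dT = {dt:3d} K  ->  loss = {heat_loss(dt) * 1000:7.1f} mW")
```

Running that, the loss at 150K above ambient comes out noticeably more than 1.5 times the loss at 100K above, because the radiation term grows with the fourth power of absolute temperature.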
Now, considering that 445nm lasers, and with them powers over 1W, only appeared at around the same time as the inaccuracy reports started showing up, I think there is a correlation between their arrival and the apparent inaccuracies of the LPMs.
I suspect that when that much power hits such a small sensor, the temperature of the target sky-rockets. Going by my figures above, temperatures of more than 100C above the surroundings are needed for the sensor to reach thermal equilibrium with the surrounding air at over 1W. Since real sensors are smaller than my hypothetical 3cm x 3cm plate, they cannot dissipate as much heat (heat dissipation is proportional to surface area), and I believe the temperatures could easily exceed 200C for 2W lasers.
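Continuing the sketch above, shrinking the plate shows why a small sensor has to run so much hotter: in that simple model the loss scales directly with surface area, so a hypothetical 1cm x 1cm target (just an illustrative size, not any particular sensor) sheds only about a ninth of what the 3cm x 3cm plate does at the same temperature rise:

```python
# Same assumed model as above, smaller plate.
A_SMALL = 0.01 * 0.01   # hypothetical 1cm x 1cm target, m^2

for dt in (100, 200, 300):
    big   = heat_loss(dt)            # 3cm x 3cm plate
    small = heat_loss(dt, A_SMALL)   # 1cm x 1cm plate
    print(f"dT = {dt} K: 3x3cm plate sheds {big:5.2f} W, "
          f"1x1cm plate sheds {small:5.2f} W")
```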
TECs (or any thermopile, for that matter) produce a voltage proportional to the temperature difference across them, so their response to temperature is linear. But since the power-to-temperature relationship is non-linear, the power-to-voltage response is non-linear as well.
Since an LPM has to measure anything from 1mW to 2W and beyond, there is a large range of temperatures it has to deal with, and because heat dissipation to the surroundings increases non-linearly with temperature, the hotter the sensor gets, the larger the error becomes.
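Here's a small extension of the earlier sketch to show what that does to a naive calibration. It finds the steady-state temperature rise for a given input power (by numerically inverting the heat-loss function), turns it into a voltage with an assumed thermopile gain, and then applies a single scale factor calibrated at 100mW. The gain and the calibration power are made-up values, but the under-reading trend at higher powers follows from the model itself:

```python
# Invert heat_loss() numerically: find the temperature rise at which the
# plate sheds exactly the input power (steady state), via bisection.
def steady_delta_t(p_in, lo=0.0, hi=1000.0, iters=60):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if heat_loss(mid) < p_in:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

GAIN = 2e-3   # assumed thermopile output, volts per kelvin of rise

def sensor_voltage(p_in):
    """Steady-state sensor voltage for a given absorbed power (W)."""
    return GAIN * steady_delta_t(p_in)

# One-point "linear" calibration done with a 100mW laser:
cal_factor = 0.100 / sensor_voltage(0.100)   # watts per volt

for p_true in (0.1, 0.5, 1.0, 2.0):
    p_reported = cal_factor * sensor_voltage(p_true)
    print(f"true {p_true:3.1f} W  ->  reported {p_reported:4.2f} W")
```

With those assumptions, the reported value falls further below the true power as the power goes up, which is exactly the under-reading pattern people have been describing.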
I know curve adjustments are made to account for this issue, and to make those curve adjustments you need calibration lasers. It is easier to make a low-power calibration laser than a high-power one.
So there are two possibilities that I am suspecting:
1. Either the curve adjustments and calibration of LPMs are assumed to be linear (failing to account for the increased dissipation of heat to the environment), which would result in inaccuracies at high powers (this would apply exclusively to DIY LPMs),
Or
2. They are calibrated non-linearly (stepwise, I've heard, with an approximation for each region) but were never actually calibrated at really high powers, like >2W or so, resulting in either:
A) a linear approximation from a certain power upwards, which runs into the same problem as above, or
B) a polynomial approximation (i.e. something Taylor-series-like), which is inaccurate when extrapolating, and the more accurately it fits within the verifiable data points, the worse it tends to be outside them (a common problem with polynomial approximation; see the sketch after this list).
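To illustrate point 2B with the same made-up sensor model as before: fit polynomial calibration curves to noisy readings taken only up to 500mW, then extrapolate them to 2W. The noise level and polynomial degrees are arbitrary, but the pattern is the classic one: higher-degree fits hug the calibration points more tightly and then wander badly once you leave the calibrated range (which way they wander depends on the noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Calibration" points: known powers up to 0.5 W and the (noisy) voltages
# the model sensor would read for them, in millivolts.
p_cal = np.array([0.005, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5])
v_cal = np.array([sensor_voltage(p) for p in p_cal]) * 1e3
v_cal *= 1 + rng.normal(0.0, 0.005, size=v_cal.shape)   # ~0.5% read noise

v_at_2w = sensor_voltage(2.0) * 1e3   # voltage a true 2 W beam would give

for deg in (2, 4, 6):
    fit = np.poly1d(np.polyfit(v_cal, p_cal, deg))
    worst_in_range = np.max(np.abs(fit(v_cal) - p_cal))
    print(f"degree {deg}: worst in-range error {worst_in_range * 1000:.1f} mW, "
          f"extrapolated reading for a true 2 W input = {fit(v_at_2w):.2f} W")
```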
Given that you have to put in a more-than-proportional amount of power to keep raising the temperature as it climbs, the likely result is under-measurement at high powers.
Also, I don't know whether the Laserbees from a few years ago were calibrated up to 2W using actual 2W lasers (although now they definitely should be), so for people who have older Laserbees, there's a possibility that the higher-power regions are more of an extrapolation? (This is a stab in the dark. Jerry, feel free to clarify.)
I have talked with ARGLaser regarding this, and he suggested posting it up to get more opinions.
What do you guys think? Could this explain why many LPMs seem to be under-reading at high powers? (The non-professional ones that use TEC sensors, that is.)