As I discussed in this past post about MIDIJets, I was attempting to make a platform for surveying microjet actuator locations and parameters in aerodynamic flows for my PhD research. But I think this is something that can be quite useful in many other contexts. After working with it for a couple of months now and realizing how robust the driver I developed is (yes, I’m proud!), I decided to release this project as open-source hardware. Maybe someone else might find it useful?!
With that said, the project files can be found at this GitHub page: https://github.com/3dfernando/Jexel-Driver . The files should be sufficient for you to build your own board, program it with a PICKit4 (I’m pretty sure you should be fine with older PICKit versions) and communicate with the serial port through a USB connection.
What can I do with it?
Now, let’s talk about the device’s uses. Being able to control many solenoids with a single board can be very useful. In my case, the application is aerodynamic research: we can activate or energize the boundary layer of a flow. But maybe the applications transcend aerodynamic research? Imagine a haptic feedback glove that produces vibrating air jets on your fingers; how cool would that be? Or maybe an object manipulator that controls where air issues from? I think there are other possibilities to be explored. If you would like to replicate this, let me know.
Visualizing the jets
Here’s some quick flow-vis showing the pulsating jets with a small phase delay of 60º. Just as a reminder, visualizing jets of 0.4 mm diameter is not easy, so I apologize if the video looks noisy! There’s a dust particle floating in the air in some frames; it’s kinda distracting, but it’s not part of the experiment!
Well, I’m a mechanical engineer, so board design is not really something I do professionally. Therefore, expect some issues or general weirdness in my design. If you’d like to replicate this, I used a Matrix Pneumatix DCX321.1E3.C224 solenoid valve. It is not a large valve, and the right connector for it is in the project BOM. The catch is that this is a high-voltage, low-current valve (24 V, 50 mA), and the driver shield I designed has those specs in mind. This means a different driver circuit would probably be needed for valves with different specs. Also, for higher currents, be mindful that the motherboard carries the drive current through it, possibly generating some noise if the current is too high (yes, I was not very smart in the board design!).
Well, I hope you found this mildly interesting. If you think you could use this project and you made something cool inspired by this, I would be pleased to know!
Like an increasingly large portion of the world’s population, I have dedicated countless hours to deliberating about an unfortunate fact of life: as we grow older, we eventually might not be able to provide for ourselves due to the natural degradation of our bodies. Capitalist society, however, provides us with the choice of converting our human capital (i.e., our ability to work) into assets (money, stock, real estate) that can be used in the future to keep us going even when we lose the ability to work.
Therefore, it is important (and heavily underappreciated) to put aside a portion of your hard-earned capital for when those hard times come. Human psychology, however, does not align very well with this rational argument. We naturally find ourselves jeopardizing these long-term goals by enjoying ourselves too much while we’re young and active, to the point of going into debt to buy the latest gadget.
You see, the consumerist culture of capitalism and the necessity of saving for the future are not mutually exclusive behaviors. A reasonably intelligent and disciplined person should be able to consume goods and still promote the advancement of society through the fostering of competition and the funding of technological research, which is one of the greatest achievements of consumerist capitalism. As a matter of fact, though, this consumerist nature tempts the less rational part of ourselves into all sorts of behaviors that are dubious from a financial planning perspective. Thus, many countries institutionalized retirement savings as mandatory through social security. The optimality of this solution is questionable, but under a “greater good” goal function it definitely is a sensible decision.
The collection of such a large pool of retirement capital under the management of a single countrywide institution has benefits and caveats that are important to discuss. High levels of money-management specialization should be expected from such an institution, given that the best in the field can be afforded to manage such a large asset. On the other hand, the data does not corroborate the effectiveness of highly specialized money managers working with institutional capital when compared against simply buying and holding the market [as very well explained by Benjamin Felix, references in the video]. It is also reasonable to expect that a larger pool of capital can dilute many behavioral and idiosyncratic market risks, averaging out the effects of spurious market movements. In third-world countries like the one I come from [Brazil], however, there is less trust in the effectiveness of the management of these funds, as the transparency of social security data is low and a lot of room is left for dishonest behavior. I personally see social security in such environments as another “tax”, one which does pay back in the long term but is prone to mismanagement and corruption.
Don’t retire early
With my stance in the argument set, I believe that, regardless of social security, one should save personal funds for a discretionary retirement. You see, unlike people in the FIRE movement [Financial Independence, Retire Early], I believe careful selection of your professional career path during your 20s should be enough to provide sufficient personal satisfaction from your job that you wouldn’t need – or want – to retire early. If one’s job is fulfilling and provides a sense of contribution to society, why trade it for “enjoying life” by doing absolutely nothing useful? Obviously, enjoying vacation trips now and then is important for a healthy life balance, but I’d say that would become boring rather fast if it was the only thing you did for a couple of decades.
Granted that you chose a fulfilling career, it is sensible to keep contributing to society for as long as you physically and mentally can. If you did not, consider changing while you can. Even if it is financially less rewarding, in the long run you’ll keep it up for much longer. And the fact that you enjoy what you do usually makes you willing to go the “extra mile”, which is key to becoming respected in your area.
Nobel laureate Eugene Fama showed through his research that, in an efficient market, actively managing your money gives you no statistical edge over an investor who simply buys and holds the market. Fama and Kenneth French also showed that a few specific factors have reasonable theoretical foundations and explain the gains of the market as a whole. Their “three-factor model” shows that a regression fit of historical stock pricing data can explain the performance of asset portfolios with three factors: the market factor is a “premium” for investing in the higher-risk stock market; the size factor is a premium for investing in higher-risk, smaller-capitalization stocks; and the value factor is a premium for investing in companies with a higher book-to-market ratio. I confess I don’t fully understand the theoretical justification for increased average returns on the higher-risk stocks. One thing that I find reasonable, in my personal ignorance of the financial market, is that the market factor justifies itself as long as we have large positive macroeconomic movements (i.e., as long as population grows, the total amount of goods produced increases through better technology, etc.). I think it is a rather important limitation of Fama’s model that we need these macroeconomic movements to occur in order to have our stocks grow long term, and major macroeconomic downturns are not impossible in the future if catastrophic events occur. Due to the unlikelihood of these events, and the hopelessness of safeguarding against them even if they do occur, I still believe investing is a reasonable strategy.
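For reference, the three-factor regression described above is usually written as follows (my transcription of the standard form from the asset-pricing literature; R_f is the risk-free rate, SMB and HML are the size and value factor portfolios):

```latex
R_{i,t} - R_{f,t} = \alpha_i
  + \beta_i \left( R_{m,t} - R_{f,t} \right)
  + s_i \, \mathrm{SMB}_t
  + h_i \, \mathrm{HML}_t
  + \varepsilon_{i,t}
```

The three “premiums” in the text correspond to the loadings: β_i on the market excess return, s_i on the size factor, and h_i on the value factor.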
Fama’s research sparked the creation of potentially one of the greatest tools for financial investing: index funds. Though index funds have been around for several decades now, index and other factor-related funds have popped into existence with incredibly small fees and the liquidity of a stock through exchange trading. This allows small, individual investors to decide their own investment strategy and risk tolerance in a DIY approach. If you’ve seen any other post from this blog, you know I love DIY!
The largest casino in the world
When I was 18 years old I had my first experience with the stock market. After playing with it for a few months, I concluded – in my then-naive view of the world – that the stock market is just institutionalized gambling. The emotions you feel when your money fluctuates in the market are rather bewildering, and I honestly experienced a real adrenaline pump while binge-watching my long positions ride the market’s tide. It all seemed random, though. I tried looking for patterns, learned technical analysis and applied it as a guide to my investments. But after getting deeply acquainted with it, I felt like I was just finding patterns in randomness, as we do when we see faces in clouds or stick figures in the stars. These patterns appeared to have the same predictive power as flipping a quarter. After that experience, I decided I would not touch stocks ever again in my life.
Academic research really helped me develop a more sober view of the market. The outreach work by Ben Felix also helped me see through the bullshit of financial channels and blogs on the internet. After what was pretty much a decade, I felt more prepared to give it another shot. The knowledge of statistics, scientific bias and data analysis, and just plain critical thinking, developed through higher education, were instrumental in the establishment of my current, totally non-expert opinion of the financial market. So I decided to write this and share some of my humble data analysis results in the hope that other people might find it “dumbed down” enough to give it a go. I confess that some of the papers by Fama and French are still over my head due to sheer academic jargon and encoding.
As I hinted at before, it is worth the exercise to ask ourselves why it makes sense to invest in the market in the first place. Why does the stock market seem to grow ever higher in value? Where is the wealth being generated? Is the market a zero-sum game? If so, who is losing money?
These questions still linger in my head, to be honest. To get started, I think we need to address what a zero-sum game is. A zero-sum game is a description of a system where the total amount of some token is conserved, such that only transfers of that token between the players are possible. No “token” is created out of nowhere. All casino games are zero-sum, for example: the players put their money in a pot, and the results of the game determine how that pot is distributed among the winners and losers. Usually in a casino, the game is such that the “house” has a slight statistical edge and will, over thousands of rounds, accumulate wealth. Since the game is zero-sum, that wealth must come from the players. We then have a very good distinction between “investing” and “gambling”: while both endeavors are risky and statistical in nature, gambling is a zero-sum game. Investing, on the other hand, is a positive-sum game.
But how is this even possible? How can one create money out of thin air? Well, surely the Federal Reserve in the US (and its equivalents in other countries) does, right? That would make the game positive-sum, because now money has been created out of thin air. Well, not really. Though the total numerical amount of money might be larger due to the “materialization” of money, no actual wealth was created by doing so.
This brings us to an important point in investing. What does money mean? What is the nature of wealth? Well, I don’t pretend to know the answer to these questions. My readings lead me to believe that money is a token institutionalized in our governments through thousands of years of iteration. It seems to be a natural manifestation of society: instead of trading goods directly, we use the money token as a convenience. It only stores value because everyone agrees it has value. Without digressing too much on why money has value, one can meditate that one way to earn money, and therefore generate value, is through work. The careful application of one’s time and expertise to transform raw materials into more useful devices, goods or other consumables is a reasonable means of earning money. Let’s take the example of a material good, say, a chair. A chair stores value within itself because it is a useful device that allows humans to sit comfortably while doing less involved activities or just enjoying themselves. It retains its value over time because it keeps accomplishing that task for a relatively long period, until it finally decays to the point of becoming undesirable.
In the case of the chair, the people involved in harvesting the naturally occurring materials to build it, cutting them into shapes that embody the function of the chair, and finally putting it together need to be compensated for their time in doing so. Furthermore, the people involved in auxiliary services such as delivery, selling, handling and managing will also have spent a small fraction of their time on the particular chair you’re sitting in while reading this, for which they also need to be compensated. Their time, therefore, is stored in the value of the chair. And you, when purchasing it, are willing to pay your earned money to have it. Of course, your function in society also produces tangible or intangible goods in some sense, and your time is compensated such that you can afford to pay for the chair.
Through this reasoning I believe we can establish that goods and services store value, and that the production of such goods and services is how wealth (and therefore money) is created. Some goods last longer, thus storing wealth for a longer time. Others last for very little time before spoiling (e.g., food) or destroying themselves, thereby retaining their value for less time. This means that wealth is also destroyed over time, and in order to have net positive wealth generation, people need to produce more value than is naturally destroyed. I would say a key requirement for this to happen is that populations keep growing, because that increases the overall demand for goods and services.
My current understanding, based on this argument, is that money is just an agreed-upon representation of people’s productive time. This representation is also useful to quantify the impact of one’s relative productiveness, since some people earn more money for the same amount of time invested in contributing to society. I’m not claiming this is a fair representation, but the dynamics of market supply and demand should, to at least some extent, dictate the relative usefulness of people’s contributions. The efficiency of the job market is a point I haven’t researched much myself, however. But in a sense, this is why it is somewhat accepted that there is some positive correlation between individual wealth production and relative contribution to society (i.e., the dichotomy between highly regarded jobs such as doctors, engineers, etc., versus lower-waged jobs like the exploited workers of fast food restaurants and supermarkets). But I think this is too controversial a topic to be discussed here, because I don’t believe people deliberately want to be useless to society.
So, HOW MUCH IS the errorbar?
Ok, this was a lot of meditation about capitalism. For personal financial decision-making, I’m sure none of that is necessary. What I really wanted to share, though, is my underwhelming observations of historical data. You see, if one believes index fund investing is a viable alternative for not only keeping the value of their money but also increasing it over time, then the evidence should point to a mean effective growth of value over time, net of inflation effects, right? Well, though that has already been shown in numerous papers, I wanted to give it a go myself. So let’s take the historical S&P500 index data as a benchmark for data analysis. The S&P500 index, however, does not account for inflation, so the first step is to remove inflationary effects. If we do that, we get the following chart:
Interestingly, the chart indicates roughly sixfold growth in the index over the course of 90 years. As of the time of this writing, the US markets are regarded to be in a “bull run”, which obviously needs to be taken into account. But I’d say everyone agrees that, on average, there is indeed an overall trend of growth even after inflation correction. For comparison, the first data point of the series, in December of 1927, shows an index value of 17.66 before correction and 262.3 after correction to 2019 money.
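For reproducibility, here’s a minimal sketch of the inflation-correction step, assuming the index and a CPI series are available as monthly pandas Series (the column layout, toy numbers and base month below are my own choices, not taken from the original code):

```python
# Sketch of the inflation correction described above.
# The data below is a made-up toy example; load your own index/CPI series.
import pandas as pd

def inflation_adjust(index: pd.Series, cpi: pd.Series, base: str) -> pd.Series:
    """Express a nominal price index in the money of the `base` month.

    `index` and `cpi` are monthly series sharing the same DatetimeIndex.
    """
    cpi = cpi.reindex(index.index).ffill()
    return index * (cpi.loc[base] / cpi)

# Toy example: prices double while CPI also doubles -> flat real value.
dates = pd.date_range("1927-12-01", periods=3, freq="MS")
nominal = pd.Series([17.66, 26.0, 35.32], index=dates)
cpi = pd.Series([10.0, 15.0, 20.0], index=dates)
real = inflation_adjust(nominal, cpi, "1927-12-01")
```

Note that to express past values in 2019 money (as in the chart), you would instead pass a 2019 month as `base`, which scales every point up by the cumulative inflation since its date.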
So there’s a mean growth. But when we buy into stocks, we generally do not know where exactly we are sitting on this curve. Maybe now we’re at a peak? Maybe not; maybe we are still on the rise and the next crash will be way past 4000 points. The point is, we don’t know a priori, and we can’t know. Especially for us peasants who are not involved in finance, attempting to predict that is a waste of our valuable human asset and skill in other fields of knowledge. The practical question for non-specialists is whether, statistically speaking, there is an expected return (which seems to be the case from the figure above), but also, what is the amplitude of the other outcomes (i.e., good and bad)? This is what I mean by putting an errorbar on your money. Every time you look at the stock market, the nominal value of your holdings is volatile – there’s some fluctuation, or noise, to it. The question I want to answer by analyzing the data here is: how much is that noise?
It is reasonable to expect this noise to change over time: fluctuations on a daily basis should be small, but larger excursions should be expected over longer periods, both for bull and bear runs. So this question only makes sense to me under a specified time horizon, and we can analyze the historical data with different time horizons. If we look at, say, a one-week time horizon, we can take any arbitrary pair of dates 7 days apart and compute the return between them. Averaging over the whole return time series gives the mean return over a week. We can also look at statistical properties like the standard deviation and percentile values, which give us the size of that “errorbar”. So I’ve done that. The results over overlapping periods between 7 days and 40 years look like this:
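The procedure above can be sketched in a few lines (my own minimal reimplementation, not the exact code from the repository; the toy series at the end is just a sanity check):

```python
# For a given horizon (in samples), compute every overlapping return in the
# series, then summarize mean, standard deviation and 1st/99th percentiles.
import numpy as np

def horizon_returns(prices, horizon):
    """Percent returns over every overlapping window of length `horizon`."""
    p = np.asarray(prices, dtype=float)
    return 100.0 * (p[horizon:] - p[:-horizon]) / p[:-horizon]

def summarize(returns):
    return {
        "mean": np.mean(returns),
        "std": np.std(returns),
        "p01": np.percentile(returns, 1),
        "p99": np.percentile(returns, 99),
    }

# Toy series: steady 1% growth per step -> every 1-step return is exactly 1%.
prices = [100 * 1.01**k for k in range(50)]
stats = summarize(horizon_returns(prices, 1))
```

Running `summarize(horizon_returns(...))` over the inflation-corrected index for each horizon produces the mean, standard deviation and percentile curves plotted in the charts.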
Some interesting observations can be made about the long term with the chart above: a mean trend of positive returns is expected over the course of 40 years. You should expect to triple your money (a 200% return), inflation corrected, over 40 years. Not as much as I hoped for, to be honest, but also not that bad. It gives me a very good sense of how much the money I save now will be worth when I retire. This also gives grounds for decision making, which is awesome!
Furthermore, observe in the chart above that over a period of about 30 years, a positive return is not only expected but occurred in at least 99% of the historical windows. It takes that long of a wait. This gives a good sense of the investment horizons we are talking about here.
Unfortunately, a logarithmic scale can only be used on the time axis, as negative returns cannot be plotted on logarithmic scales. Therefore, the returns over periods of less than a year are rather difficult to observe. So the chart below shows a zoomed version of the data, from 7 days to 1 year, where we can see the growth of the “errorbar size”. Within a week, the standard deviation is 2.8% and the 1–99 percentile range encompasses returns between -8.2% and +7.5%. In a month, the standard deviation grows to 5.8% and the 1–99 percentile range now encompasses returns between -16% and +13%. Within a year, the 1–99 percentile range grows to between -43% and +55%. Even though the mean of the returns is always positive (over a year it’s +4.54%), it is interesting to see that the distribution has a slightly larger negative tail. This suggests that emotional responses affect negative movements of the market in the short term more strongly.
Another interesting observation I made with this data is displayed in the animated GIF below. It is interesting to see how the probability distribution is pretty much normal for periods of less than 4 months, losing that character as the periods grow longer. For a year, the distribution is more triangle-shaped, and for over 3 years it starts to morph into a long positive tail. The fact that the distribution closely resembles a normal distribution over short periods (i.e., less than a quarter) suggests how much time companies need to realize gains. It also delineates the boundary between short-term stock market gambling and the actual generation of wealth over the long run.
For me, this personal analysis is very captivating evidence that the stock market is a positive-sum game. I know this is limited to the U.S. market, and that the political hegemony of the U.S. is probably biasing the results toward a positive conclusion, which might not hold in the long run. Nevertheless, I believe for my short little lifespan it might still have somewhat valid, empirical application. The statistical distribution of gains makes me more resilient to market downturns now, since I know what kinds of movements to expect in the short term – which, unfortunately, are rather large.
As a practical mnemonic, I’d say 2 standard deviations is enough to capture the expected movement of the short-term market. This would mean weekly movements are expected to fall within about ±5%, monthly movements within about ±10%, quarterly movements within about ±20% and yearly movements within about ±40%. It’s a large errorbar to put on your money, but one that has to be accepted if any positive expected return, inflation corrected, is desired.
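This mnemonic is roughly what square-root-of-time scaling of the weekly standard deviation would predict, which can be checked in a couple of lines (assuming independent weekly returns, which is only approximately true):

```python
# Extrapolate the +/- 2-sigma band from the weekly standard deviation quoted
# in the text (2.8%), assuming volatility grows with the square root of time.
import math

WEEKLY_SIGMA = 2.8  # percent, from the historical analysis above

def two_sigma_band(weeks: float) -> float:
    """Approximate +/- 2-sigma movement (in %) over `weeks`."""
    return 2.0 * WEEKLY_SIGMA * math.sqrt(weeks)

bands = {label: round(two_sigma_band(w), 1)
         for label, w in [("week", 1), ("month", 4.33),
                          ("quarter", 13), ("year", 52)]}
```

This gives about ±5.6%, ±11.7%, ±20.2% and ±40.4% for week, month, quarter and year – close to the rounded mnemonic numbers above.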
I hope you also got inspired to look at the data yourself. If that’s the case, have a look at my code on GitHub. It is simple code, and I’ve made some simplifications for the sake of analysis. Nevertheless, I think the conclusions are quite valid. Hope you’ve learned something!
As a disclaimer, I’m applying this technique in a scientific setting, but I’m sure the same exact problem arises when doing general macro photography. So, first, what is a Scheimpff…. plug?
Scheimpflug is actually the last name of Theodor Scheimpflug, who apparently described (not for the first time) a method for perspective correction in aerial photographs. This method is called by many “the Scheimpflug principle”, and it is a fundamental tool in professional photography for adjusting the tilt of the focal plane with respect to the camera sensor plane. It is especially critical in applications where the depth of field is very shallow, such as in macro photography.
As an experimental aerodynamicist, I like to think of myself as a professional photographer (and in many instances we are actually better equipped than most professional photographers in regards to technique, refinement and equipment, I reckon). One of the most obnoxious challenges that comes up time and again in wind tunnel photography is the adjustment of the Scheimpflug adapter, which is the theme of this article. God, it is a pain.
What is in focus?
First, let’s define what “being in focus” means. It is not very straightforward, because it involves a “fudge factor” called the “circle of confusion”. The gif below, generated with the online web app “Ray Optics Simulator”, shows how this concept works. Imagine that the point source in the center of the image is the sharpest thing you can see in the field of view. It could be anything: the edge of text written on a paper, the contrast of a leaf’s edge against the background in a tree, the edge of a hair, or, in the case of experimental fluid dynamics, the image of a fog particle in the flow field. No matter what it is, it represents a point-like source of light, and technically any object in the scene could be represented as a dense collection of point light sources.
If the lens (double arrows in the figure below) is ideal and its axis is mounted perpendicular to the camera sensor, the image of the point source will converge toward a single point. If the point source and the lens are at the right distances from each other (following the lens equation), the size of the point will be as infinitesimal as the source, and the point image on the sensor will be mathematically sharp.
However, nothing is perfect in reality, which means we have to accept that the lens equation might not be perfectly satisfied for all points in the subject, as that can only happen for an infinitesimally thin plane on the subject side. If the lens equation is not satisfied (i.e., as the dot moves on the subject side, as shown in the animated gif), the image of the point source will look like a miniature image of the lens on the camera sensor plane. If the lens is a circle, then the image will look like a circle. This circle is the circle of confusion, i.e., the perfect point on the object side is “confused” with a circle on the image side.
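To put a number on this, here’s a small sketch using the thin-lens equation (the focal length, aperture and distances below are made up for illustration):

```python
# Circle of confusion for an ideal thin lens: light from the full lens
# aperture converges at the image distance; a sensor placed elsewhere
# intercepts a cone of diameter proportional to the defocus.

def image_distance(f: float, d_obj: float) -> float:
    """Thin-lens equation 1/f = 1/d_obj + 1/d_img, solved for d_img."""
    return f * d_obj / (d_obj - f)

def blur_circle(f: float, aperture: float, d_obj: float, d_sensor: float) -> float:
    """Diameter of the circle of confusion for a point at d_obj when the
    sensor sits at d_sensor behind the lens (aperture = lens diameter)."""
    d_img = image_distance(f, d_obj)
    return aperture * abs(d_sensor - d_img) / d_img

# Hypothetical 50 mm lens with a 25 mm aperture, subject at 1000 mm:
f, A = 50.0, 25.0
d_focus = image_distance(f, 1000.0)           # sensor placed here -> sharp
c_sharp = blur_circle(f, A, 1000.0, d_focus)  # zero: lens equation satisfied
c_blur = blur_circle(f, A, 900.0, d_focus)    # point moved closer -> blurred
```

Note that the blur diameter scales with the aperture diameter `A`, which is exactly the trade-off discussed in the next section.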
The Aperture Effect
The presence of an aperture between the lens and the camera sensor changes things a bit. The aperture cuts the light coming from the lens, effectively reducing the size of the circle of confusion. The animation below shows the circle of confusion shrinking as the aperture is closed. This allows the photographer to perform a trade-off: if the circle of confusion is smaller, the image is acceptably sharp over a larger depth, increasing the depth of field. But if light is being cut off, then light is being lost and the image becomes darker, requiring more exposure or a more sensitive sensor. The markings on the side of the lens for different aperture openings (f/3.3, f/5, etc.) indicate the equivalent, or “effective”, lens f-number after the aperture is applied. Since the lens focal length cannot be changed, the equivalent lens is always smaller in diameter and therefore gathers less light. The shape of the “circle of confusion” also usually changes when using an aperture, as most irises are n-gons instead of circles. This effect is called “bokeh” and can be used in artistic photography.
Effect of the aperture on the circle of confusion.
Focusing on a Plane
Hopefully all of this makes sense now. Let’s make our example more complex with two point sources, representing a line (or a plane) that we want in focus. We’ll start with the plane in focus, which means both points are at the same distance from the lens. Tilting the plane will make the circle of confusion at the plane’s edges grow (in the gif below, tilting the plane is represented by moving one of the points back and forth). This results in a sharp edge on one side of the plane and a blurry edge on the other.
The effect is usually seen in practice as gradual blurring, as the image below shows. The image becomes blurry because the circle of confusion is growing, but how much can it grow before we notice? It depends on how we define “noticing”. An “ultimate” reference size for the circle of confusion is the pixel size of the camera sensor. For example, the Nikon D5 (a mid-high-level professional camera) has a pixel of around 6.45 μm. Cameras used in aerodynamics have pixels of that order (for example, a LaVision sCMOS camera has a 5.5 μm pixel as of 2019). High-speed cameras such as the Phantom v2012 have much larger pixels (28 μm) for enhanced light sensitivity. It makes sense to use the pixel size because that’s the sharpest the camera can detect. In practice, though, unless you print in large format or digitally zoom into the picture, it is very common to accept multiple pixels as the circle of confusion. With low-end commercial lenses, the effects of chromatic aberration far supersede the focus effect at the pixel level anyway. But bear in mind that, if that is the case, your 35 Mpx image might really be worth only 5 Mpx or so. It is also generally undesirable to have only part of the image “mathematically sharp” in a PIV experiment, since peak locking would happen only in a stripe of the image.
The Scheimpflug Principle
Well, this is the theory of sharpness, but how does the Scheimpflug principle help? The next animation attempts to show that. If you tilt the lens, the circles of confusion slowly grow to the same size, which means there is a focal plane where they are exactly the same size. I “cheated” a bit by changing the camera sensor size at the end; in practice it is the camera that would be moving, not the object plane. This demo hopefully shows that there is a lens tilt angle that brings everything into focus.
The Hinge Rule
Though much deeper explanations are available on the Internet (like on Wikipedia), I personally found that playing with the optical simulation makes more intuitive sense. Now we can try to understand what the Scheimpflug hinge rule is all about from a geometrical optics perspective.
The animation below defines two physical planes: the Lens Plane [LP], where the (thin) lens line lies; and the Sensor Plane [SP], where the camera sensor is placed. If the lens is tilted, these planes will meet at a line (a point, in the figure). This is the “hinge line”. The hinge line is important because the Focus Plane [FP] is guaranteed to go through it. With only these planes, however, the hinge rule would still be underdefined.
The third reference needed is given by the Plane Parallel to the Sensor at the Lens Center [PSLC] and the Lens Front Focal Plane [LFFP]. These two lines are guaranteed to be parallel, and they define a plane – the Focus Plane [FP] – where the point light sources are guaranteed to be in focus. A full proof of the hinge rule is readily available on Wikipedia and is not absolutely straightforward, so for our purposes it suffices to say that it works.
Lens Hinge vs Scheimpflug Hinge
Another confusing concept when setting up a Scheimpflug system is that the Scheimpflug adapter itself usually has a hinge about which it swivels. That hinge line (the Lens Hinge) is not to be confused with the Scheimpflug principle hinge explained before. But it does interfere when setting up a camera system, because the Lens Hinge is the axis the lens actually pivots about, so it ends up changing the focal plane angle, where the camera is looking, and the actual location of the focal plane. So I set up a little interactive Flash simulation that determines the location of the plane of focus and lets you understand the swivel movements I’m talking about. Here’s the link: http://www.fastswf.com/bHISKZA. There’s a little jitter for Scheimpflug angles close to zero due to “loss of significance” in the calculations, but it should be understandable.
Since most browsers aren’t very fond of letting Flash code run, you can also see a video of me focusing on an object plane (blue) below. In the animation, the camera/lens assembly swivels around the CH (Camera Hinge) axis and the lens swivels around the LH (Lens Hinge) axis. The Scheimpflug Hinge (SH) is only used when performing the focusing movement of the camera. The focus optimization algorithm, however, is fairly straightforward for a 2D (1 degree of freedom – 1 DOF) setup:
Look at the object plane: Swivel the camera hinge CH until the camera looks at the object.
Adjust lens focus: Turn the lens focus ring (effectively moving the lens back and forth) until at least some of the object is in focus.
Change the Scheimpflug adaptor: Increase/decrease the Scheimpflug angle by some (arbitrary) value. This will make the camera look away from the object plane.
Repeat the three steps as much as you need and you should converge to a good focus as shown in the video. Sometimes I skip a step because it is unnecessary (i.e. the object is already partially in focus).
And here are the effects of the individual movements when using the Scheimpflug adaptor:
But Where’s the Lens Plane?
This one threw me off for a while, so I expect not everyone would be familiar with this. Let’s say you’re trying to design a Scheimpflug system and you are using regular camera lenses (i.e., a Nikon/Canon lens). These lenses contain multiple elements, so it is not straightforward what is the definition of “focal length” that the lens is rated for, and most importantly, where this “effective lens” lies in physical space.
This reference and many others provide formulas for finding the effective focal length (EFL) of multiple lens arrangements. If the link dies, here’s the equation for a two-lens arrangement: EFL = f1·f2 / (f1 + f2 − d).
The effective focal length depends on the two lenses’ focal lengths (f1 and f2) as well as on the distance between the two lenses (d). But most importantly, you can swap f1 and f2 (say, if you flipped the lens arrangement) and the EFL will remain the same. This is usually the case in multiple lens arrangements, and it is why most DSLR lenses are rated for a single focal length – their effective focal length.
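A quick numerical check of that symmetry (a toy two-lens example with made-up focal lengths, not any particular real lens):

```python
def efl(f1, f2, d):
    """Effective focal length of two thin lenses separated by distance d."""
    return f1 * f2 / (f1 + f2 - d)

# Flipping the arrangement (swapping f1 and f2) leaves the EFL unchanged:
a = efl(100.0, 50.0, 30.0)   # ~41.7 mm
b = efl(50.0, 100.0, 30.0)   # same value
```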
The EFL can be regarded as a means to replace the complex lens arrangement with a single thin lens. But where is that “effective lens” in physical space? Well, that is a rather difficult question because most lenses will still have an adjustment ring for their focal distance. So, let’s start with a lens focusing at infinity.
Focusing at infinity is the same as assuming parallel rays are incoming to the lens. These parallel rays will form a sharp point exactly at the lens focal point (by definition). So, if a compound lens is set to focus at infinity (most lenses have an adjustment where you can focus at infinity), that point must lie on the camera sensor. Therefore, this thin lens must sit exactly its focal distance from the image sensor of the camera. If we also know the camera’s Flange Focal Distance (FFD), then we know exactly where this “Effective Lens” sits with respect to the camera flange, as shown in the drawing below. For example, the FFD is 46.5mm in a Nikon camera. A comprehensive list for many cameras is found here. Also, as a bonus, the Phantom v2012 high speed camera has FFD=45.8mm when using the factory standard Nikon F-mount adaptor flange.
Now let’s change the focus ring of our 50mm lens to focus, say, at a 500 mm distance. Then we can use the thin lens formula: 1/f = 1/o + 1/i.
For o=500 mm and f=50 mm we get i≈55.6 mm. Therefore, the thin lens moved about 5.6 mm away from the sensor to focus at 500 mm instead of infinity. If you look carefully, a lens will move farther from the sensor as you bring the focus closer:
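The same arithmetic in code form (a two-line helper around the thin lens formula):

```python
def image_distance(f, o):
    """Thin-lens image distance i for focal length f and object distance o,
    from 1/f = 1/o + 1/i."""
    return 1.0 / (1.0 / f - 1.0 / o)

i = image_distance(50.0, 500.0)   # ~55.6 mm
travel = i - 50.0                 # the lens sits ~5.6 mm farther from the sensor
```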
Good. So this means that if we want to do some fancier photography techniques (like using the Scheimpflug principle), we can now use the EFL and its relationship to the FFD to calculate our Scheimpflug adaptor and the Scheimpflug angle needed to focus at a particular feature. Remember, in most practical setups the Scheimpflug adaptor will act as a spacer, thus preventing the lens from focusing at infinity. The more space added, the closer this “far limit” gets and the harder it becomes to work with subjects placed far from the camera.
Scheimpflug Principle in 3D [2-DOF]
So this was all under the 2D assumption, where we only need to tilt the lens in order to get the plane in focus. Easy enough for explanations, but you don’t really find that case very often in practice. If the object plane is tilted in the other direction (in 3D) we’ll need to compensate for that angle, too. That can be done by “swinging” the lens tilt axis. In a tilt-swing adaptor, there are two degrees of freedom for the lens angle. The “tilt” degree of freedom allows the lens to tilt as previously described. The “swing” degree of freedom swivels the lens around the camera, changing the orientation of the focal plane with respect to the camera. A little stop-motion animation, below, shows how these two angles change the orientation of the lens on the camera:
Or, if you’re a fan of David Guetta, you might be more inclined to like the following animation (use headphones for this one):
When doing it in practice, however, it is rather difficult to deal with the two degrees of freedom. In my experience, the causes for confusion are:
The object plane is static, and the camera is moving, but the movement is done with the lens first – this messes a little bit with the brain!
When you tilt the lens, you need to move the camera back to see the subject because now the lens is pointing away from the object plane;
It is rather hard to know if it is the tilt angle or the swing angle that needs adjustment in a fully 3D setup
It is hard to know if you overshot the tilt angle when the swing angle is wrong, but it’s also difficult to pinpoint which one is wrong.
This compounds into endless and painful hours (yes, hours) of adjustment in an experimental apparatus – especially if you’re not sure of what exactly you’re looking for. Unlike most professional photography, in Particle Image Velocimetry it is usual to have a rather shallow depth of field, because we want to zoom a lot (like using a telephoto 180mm lens to look at something 500mm from the camera) and we need very small f-numbers to have enough light to see anything. Usual DoFs are less than 5mm and the camera angle is usually very large (at least 30º). But enough of the rant. Let’s get to the solution:
First we need to realize that most Scheimpflug adaptors have orthogonal tilt / swing angle adjustments. In other words, the tilt and swing angles define a spherical coordinate system uniquely. This means there is only one solution to the Scheimpflug problem that will place the plane of focus in the desired location. With that said, it would be great if the solution for one of the angles (i.e., the swing angle) could be found independently of the other angle, because that would reduce the problem to the 2D problem described before.
The good news is that, in most setups, this can be done. To find the correct location of the swing angle:
Get the normal vector of the target in-focus plane;
Get the normal vector of the camera sensor;
These two vectors form a plane. This is the “tilt plane”.
We need the lens to tilt in this plane. To do so, the lens tilt axis needs to be normal to the “tilt plane”.
Adjust the Scheimpflug swing such that the lens swivel axis is perpendicular to the “tilt plane”. That will be a “first guess” for the Scheimpflug swing. As you adjust the lens tilt, you should now find a solution – or at least something very close to one.
In practice there’s another complication related to the camera tripod swivel angle. If the axis the tripod swivels about is not coincident with the axis of the “tilt plane”, then the problem is not 2D. That can be solved in most cases by aligning the camera again. But if that is not possible, it will usually require a few extra iterations on the swing angle as well.
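The swing-angle recipe above boils down to a single cross product. Here’s a minimal NumPy sketch (the function name and the example normals are my own; camera-frame bookkeeping – expressing the result in the adaptor’s own coordinates – is left out):

```python
import numpy as np

def tilt_axis_direction(n_object, n_sensor):
    """Unit normal of the 'tilt plane' spanned by the object-plane normal
    and the sensor normal. Swing the Scheimpflug adaptor until the lens
    tilt axis aligns with this direction; the problem then reduces to 2D."""
    n = np.cross(n_object / np.linalg.norm(n_object),
                 n_sensor / np.linalg.norm(n_sensor))
    norm = np.linalg.norm(n)
    if norm < 1e-12:
        raise ValueError("normals are parallel: already a 2D problem, no swing needed")
    return n / norm

# Example: object plane pitched 30 deg about a horizontal axis, sensor facing +z.
axis = tilt_axis_direction(np.array([0.5, 0.0, np.sqrt(3) / 2]),
                           np.array([0.0, 0.0, 1.0]))
# axis comes out horizontal, so only pure tilt (zero swing) is needed here.
```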
Well, these definitions might be a little fuzzy in text. I prepared a little video where I go through this process in 2D [1-DOF] and 3D [2-DOF]. The video is available below.
Well, I hope these notes help you better understand the Scheimpflug adaptor and be more effective when doing adjustments in your photography endeavors. In practice it is almost an “art” to adjust these adaptors, so I think an algorithmic procedure always helps speed things up. Especially because these devices are mostly a tool for a greater purpose, so we are not really willing to spend too much time on them anyway.
Vortex core tracking is a rather niche task in fluid mechanics that is somewhat daunting for the uninitiated in data analysis. The Matlab implementation by Sebastian Endrikat (thanks!), which can be found here, inspired me to dive a little deeper. His implementation is based on the paper “Combining PIV, POD and vortex identification algorithms for the study of unsteady turbulent swirling flows” by Laurent Graftieaux, which was probably one of the first to perform vortex tracking from realistic PIV fields. The challenge is that when PIV is used, noise is introduced in the velocity fields due to the uncertainties related to the cross-correlation algorithm that tracks the particles. This noise, added to the fine-scale turbulence inherent to any realistic flow field encountered in experiments, makes vortex tracking through derivative-based techniques (such as λ2, Q criterion and vorticity) pretty much impossible.
Computational results are less prone to this effect of the noise and usually are tamer in regards to vortex tracking, though fine-scale turbulence can also be a problem. The three-dimensionality of flow fields doesn’t help. But many relevant flow fields can be “deemed” vortex dominated, where an obvious vortex core is present in the mean. Wingtip vortices are a great example of these vortex-dominated flow fields, though there are many other examples in research from pretty much any lift-generating surface.
As part of my PhD research I’m performing high speed PIV (Particle Image Velocimetry) on the wake of a cylinder with a slanted back (maybe a post later about that?). This geometry has a flow field that shares similarities with military cargo aircraft, but is far enough from the application to be used in publicly-available academic research. The cool part is that it forms a vortex pair, which is known to “wander”. The beauty of having bleeding-edge research equipment is that we can visualize these vortices experimentally in a wind tunnel. But how to turn that into actual data and understanding?
That’s where the Gamma 1 tracking comes into play. Gamma 1 is great because it’s an integral quantity. It is also very simple to describe and understand: If I have a vector field and I’m at the vortex core, I can define a vector from me to any point in this vector field (Graftieaux calls this vector “PM“). The angle between this vector and the velocity vector at that arbitrary point would be exactly 90º if the vortex were ideal and I was at the vortex core. Otherwise, it would be some other angle. So if I just look at many points around me, I just need to take the mean of the sine of the angle between these two vectors. This quantity should peak at the vortex core. That’s Gamma 1, brilliant!
Sebastian Endrikat did a pretty good job at implementing Graftieaux’s results, and I used his code a lot. But since each run I have contains at least 5000 velocity fields, his code was taking waaaay too long. Each field would take 4.5 seconds to parse on a pretty decent machine! So I decided to look back at the math. And I realized that the same task can be accomplished by two convolutions after some juggling. A write-up of that is below:
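The trick can be sketched in a few lines of Python. This is a minimal NumPy/SciPy illustration under my own naming – not Sebastian’s code, nor my actual research code – and the window radius and zero-padded edges are illustrative choices:

```python
import numpy as np
from scipy.signal import correlate2d

def gamma1(u, v, radius=5):
    """Gamma-1 field of a 2D velocity field (u, v) on a regular grid.

    Gamma1(P) = (1/N) * sum_M sin(angle between PM and U(M))
              = (1/N) * sum_M (dx * v_hat - dy * u_hat) / |PM|,
    which is exactly two cross-correlations of the normalized
    velocity components with fixed geometric kernels.
    """
    offsets = np.arange(-radius, radius + 1)
    dy, dx = np.meshgrid(offsets, offsets, indexing="ij")
    r = np.hypot(dx, dy)
    r[radius, radius] = 1.0            # avoid 0/0 at the window center
    kx, ky = dx / r, dy / r            # geometric kernels
    kx[radius, radius] = 0.0           # the point M = P is excluded from the sum
    ky[radius, radius] = 0.0

    mag = np.hypot(u, v)
    mag[mag == 0] = 1.0                # stagnation points then contribute zero
    u_hat, v_hat = u / mag, v / mag

    n = kx.size - 1                    # neighbors per interrogation window
    return (correlate2d(v_hat, kx, mode="same") -
            correlate2d(u_hat, ky, mode="same")) / n
```

On an ideal solid-body vortex, gamma1 equals exactly 1 at the core; on real PIV data the peak is lower but remains robust to noise, because no derivatives are taken. For large windows, swapping correlate2d for an FFT-based correlation speeds this up further.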
Yes – You can go to Amazon.com today and buy one of these gimmicky toys that float a magnet in the air. Some of which will even float a circuit that can light an LED and become a floating light bulb. A floating light bulb that powers on with wireless energy? What a time to live!
A quote from Arthur C. Clarke, who wrote “The Sentinel” (which later became the basis for the science fiction movie “2001: A Space Odyssey”), goes along these lines:
“Any sufficiently advanced technology is indistinguishable from magic.”
This is what led me to the Engineering path. Because, if advanced technology is indistinguishable from magic, whoever creates the technology is a real-life wizard. And who creates the technology? The engineers and scientists all around this world. So let me complement his quote with my own thoughts:
“Any sufficiently advanced technology is indistinguishable from magic. Therefore, engineers and scientists are the true real-life wizards.“
Of course, if I’m writing about it, it is because I went through the engineering exercise myself. And boy, I thought it was an “easy” project. You see these projects of floating stuff around the internet, but nobody speaks about what goes wrong. So here we’ll explore why people spend so much time tweaking their setups and what the traps are along the way.
But first, some results to motivate you to read further:
Prof. Christian Hubicki was kind enough to let me pursue this as a graduate course project in Advanced Control Systems class at FSU, so I ended up with a “project report” on it. It is in the link below:
But if you don’t want to read all of that, here’s a list of practical traps I learned during this project:
DON’T try fancy control techniques if you don’t have fast and accurate hardware. This project WILL require more than a 10-bit ADC and more than 3-5 kS/s. The dynamics are very fast because the solenoid is fast. And you want a fast solenoid to be able to control the levitating object! Unless you have a large solenoid inductance and a rise time on the order of ~100ms, there’s no way an Arduino implementation can control this. I think a nice real-time DAQ controller (like the ones offered by NI) could work here. But an Arduino is just too strapped in specs to cut it! The effects of sampling and digitization are too restrictive. It MIGHT work in some specific configurations, but it is not a general solution (and it certainly didn’t work for me).
Analog circuits are fast – why not use them? Everyone (in the hobby electronics world) thinks an Arduino is a silver bullet for everything. Don’t forget an op-amp is hundreds of times faster than a digital control loop!
Bang-Bang! Many implementations on the web use a hysteresis (or bang-bang) controller. The bang-bang controller is ideal for cheap projects because it deals with the non-linearities gracefully. But it is not bullet-proof either: in some cases it will become unstable even with high bandwidth, if the non-linearity is strong enough.
Temperature Effects: The dynamic characteristics of your solenoid will change as it heats up (you’re dissipating power to turn it on!). So if you have, say, a PID controller, tuning the gains can get very confusing, because the right gains will be different depending on the temperature of the coil. Since this effect is very slow (order of 10 minutes!), you can end up chasing your own tail because you’re tuning a plant that is changing with time!
The wireless TX introduces noise! This one is particular to this project: If you’re using a Hall effect sensor to sense the presence of the floating object (by its magnetic field), then your Hall sensor will also measure the solenoid’s field! Apart from that, the TX is also generating a high-frequency magnetic field, which will also show up in the Hall effect sensor signal. The effect of the TX is very small (~2mV) but it appears in the scope. The problem is that Arduinos don’t have low-pass filtering in their ADC inputs. So anything above half the sampling rate (the Nyquist frequency) will appear as an aliased signal, which is very nasty to deal with.
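To see why aliasing is so nasty, here’s a tiny NumPy experiment. The 10 kS/s sampling rate and the 9 kHz interference tone are made-up illustrative numbers, not measurements from my setup:

```python
import numpy as np

fs = 10_000                      # sampling rate, samples/s (illustrative)
f_true = 9_000                   # interference frequency, Hz (illustrative)
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f_true * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
f_measured = freqs[np.argmax(spectrum)]
# Without an anti-aliasing filter, the 9 kHz tone folds down to
# fs - f_true = 1 kHz and is indistinguishable from a real 1 kHz signal.
```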
Make sure your solenoid can lift your object and then some. This is an obvious one, but I think it is easy to underestimate how much you need to over-design it. I designed my solenoid to lift 100 grams of weight. But in the end, I could only work with 35 grams because the controller needed a lot of headroom to work. So overdesign is really crucial here. I ended up shaving a lot of mass from the floating object because I couldn’t lift the original design’s mass!
I’d like to put a more complete tutorial on making this, but since I already invested a lot of time in putting the report together, I think if you put some time on reading it and the conclusions from the measurements/simulations you will be able to reproduce this design or adapt the concepts to your design. Let me know if you think this was useful or maybe if you need any help!
For the ones not introduced to the art of Schlieren photography, I can assure you it was incredibly eye-opening and fascinating to me when I learned that we can see thin air with just a few lenses (or even just one mirror as Josh The Engineer demonstrated here on a hobby setup).
For the initiated in the technique, its uses are obvious in the art and engineering of bleeding-edge aerodynamic technology. Supersonic flows are the favorites here, because of the presence of shock waves, which make for beautiful, crisp images and help us understand and describe many kinds of fluid dynamics phenomena.
What I’m going to describe in this article is a very simple circuit published by Christian Willert here, which most likely is paywalled and might have too much formalism for someone who is just looking for some answers. Since the circuit and the electrical engineering are pretty basic, I felt I (with my hobby-level electronics knowledge) could give it a go, and I think you should too. I am also publishing my EasyEDA project if you want to make your own boards (yes, EasyEDA).
But first, let’s address the elephant in the room: Why should you care? Well, if you ever tinkered with a Schlieren/shadowgraph apparatus – for scientific, engineering or artistic purposes – you might be interested in taking sharper pictures. Obtaining sharper pictures of moving stuff works exactly like in regular photography: they can be achieved by reducing the aperture of the lens, by reducing the exposure time or by using a flash. The latter is where a pulsed light source really shines (pun intended!). The great part here is that the first two options involve reducing the amount of light – whereas the last option doesn’t (necessarily).
The not-so-great part is that camera sensors are “integrators”. This means they measure the amount of photons absorbed over a given amount of time. Therefore, what really matters is the total amount of photons you send to the camera. Of course, if you sent an insanely large amount of photons in a very short instant, you would risk burning the camera sensor – but if you’re using an LED (as we are going to here), your LED will be long gone before that happens.
So the secret for high speed photography is to have insanely large amounts of light dispensed at once. That would guarantee everything will be as sharp as your optics allow. Since we don’t live in the world of mathematical idealizations, we cannot deliver anything “instantly”, and therefore we have to live with some finite amount of time. “Brief enough” is relative and depends on what you want to observe. For example, if you’re taking a selfie at a party, tens of milliseconds is probably brief enough to get sharp images. For taking a picture of a tennis player doing a high speed serve, you’re probably fine with tens or hundreds of microseconds. The technical challenges begin to appear when you’re taking pictures of really fast stuff (like supersonic planes) or at larger magnifications. The picture of the jet above is challenging in both ways: its magnification level is 0.7x (meaning the physical object is projected onto the sensor at 0.7x scale) and its speed is roughly 500 meters per second. In other words, the movement of the object (the Schlieren object) is happening at roughly 63.6 million px/second, which requires a really fast shutter to have any hope of “freezing the flow”. If you’re fond of making simple multiplications in your calculator, the equation is very simple:
s = V·M/p, where s is the object displacement in px/second, V is its velocity in physical units (i.e. m/s), M is the magnification achieved in the setup and p is the physical pixel size of your camera (i.e. for a Nikon D90).
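As a sanity check, here is that multiplication in code. The 5.5 µm pixel size is back-solved from the 63.6 Mpx/s figure quoted above, not a sensor spec I looked up:

```python
def pixel_rate(velocity_mps, magnification, pixel_size_m):
    """Apparent image displacement rate on the sensor, in pixels per second."""
    return velocity_mps * magnification / pixel_size_m

# Numbers from the jet picture: 500 m/s at 0.7x magnification
rate = pixel_rate(500.0, 0.7, 5.5e-6)   # ~63.6 million px/s
```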
I know, I know. These are very specialized applications. But who knows what kinds of high speed photography are happening right now in someone’s garage, right? The point is – getting a light source that is fast enough is very challenging. Some options, such as laser-pulsed plasma light sources, can get really expensive even if you make them yourself. But LEDs are a very well-established, reliable technology with an incredibly fast rise time. And they can get very bright, too (well… kinda).
So what Willert and his coauthors did was very simple: overdrive a bright LED with 20 times its design current and hope it doesn’t explode. Spoiler alert: Some LEDs didn’t survive this intellectual journey. But they mapped the safe operational regions for overdriven LEDs from many different manufacturers. To name a few: Luminus Phlatlight CBT-120, Luminus Phlatlight CBT-140, Philips LXHL-PM02, among others. These are raw LEDs, no driver included, rated for ~3.6-4V, and are incredibly expensive for an LED. The price ranges from $100 to $150, and they are usually employed in automotive applications. The powerful flash is, however, blinding. And if they do burn out, it can be painful for the hobbyist’s pockets.
The driver circuit (which is available here) is very simple: An IRF3805 N-channel power MOSFET just connects the LED to a 24V power supply. Remember the LED is rated for 4V – so it’s going to get a tiny bit brighter (sarcasm). Jokes apart, the LED (CBT-140) is rated for 28A continuous with very efficient heatsinking, which means we will definitely be overdriving it. By how much, we can measure with R2. Hooking a scope between Q1 and R2 is not harmful to the scope and allows you to measure the current going through the LED (unless the current exceeds ~600A; then the voltage spike when the MOSFET turns off might reach a few tens of volts). We don’t want to operate at these currents anyway, because the LED will end up as in the figure below. There’s a trim pot (R3) that controls the MOSFET gate voltage; make sure pin 2 of U1 is outputting a low voltage when tuning.
What is really happening is that C1 and C2 (C2 is optional) are charged by the 24V power supply while the MOSFET is off. They then discharge into the LED when the MOSFET is activated. No power supply will be able to push 200A continuously through an LED, so if the transistor turns on for too long, the power supply voltage will drop and the power supply will reset. Actually, this is one of the ways to tell if you melted the MOSFET (which happened to me once). The MOSFET needs to turn on in nanoseconds, which requires a decent amount of current (like 4-5 amps) just to charge the gate up. This means we need a driver IC – in this case a UCC27424. Make sure to have as little resistance as possible between the driver and the gate to minimize the time constant. The 1.5 Ohms is very close to giving 4A to the MOSFET. Since the gate capacitance is around 8nF, the MOSFET gate rise time is somewhat slow (12 ns).
Speaking about time constants, during the design I realized the time constants of the capacitor that discharges into the LED and the parasitic inductances in the path between the components will dictate the rise time of the circuit, at least for the most part. In my circuit, the time constant was measured to be 100ns, directly with a photodiode. This means we can do >1MHz photography, which is pretty amazing! Unfortunately the cameras that are capable of 1 million frames per second aren’t really accessible to mortals (except when said mortals work in a laboratory that happens to have them!).
Well, the LED driver circuit is still in development – which means I’ll keep this post updated every now and then. But for now, it’s working well enough. The BOM cost is not too intimidating (~$60 at Digikey without the LED. Add the LED and we should be at ~$200), so a hobbyist can really justify this investment if it means an equivalent amount of hours of fun! Furthermore, this circuit implements a microcontroller that monitors and displays the LED and driver’s temperature. It features an auto shut-off, which disables the MOSFET driver if the temperature exceeds an operational threshold. The thermal limits are still to be evaluated, though.
For now, I did my own independent tests, and the results are very promising. Below I’m showing a test rig to evaluate the illumination rise and fall times of the LED. The photodiode is a Thorlabs unit (I forgot the model) with a 1ns rise time when attached to a 50 ohm load. It’s internally biased, which is nice when you want to do a quick test.
The results from the illumination standpoint are rather promising. Below, a series of scope traces shows that the LED lights up in a very short time and reaches a pretty much constant on state. The decay time, however, seems to be controlled by a phosphorescence mechanism, probably because this is a white LED. Nevertheless, the pulses are remarkably brief.
The good thing about having high speed cameras is that now we’re ready to roll some experiments. By far, my favorite one is shown below. I was able to use the Schlieren setup to observe ultrasonic acoustic waves at 80kHz, produced by a micro impinging jet (the jet is 2mm in diameter). The jet is supersonic; its velocity is estimated at 400 m/s. Just to make sure you get what is in the video: The gray rectangle above is the nozzle. The shiny white line at the bottom is the impingement surface. The jet is impinging downwards, at the center of the image. The acoustic waves are the vertically traveling lines of bright and dark pixels. I was literally able to see sound! How cool is that?
Just as a final note. You might be discouraged to know that I am one of these mortals that happen to have access to a high-speed camera. But bear in mind, these pictures could have been taken with a regular DSLR. The only difference is that the frame sequence wouldn’t look continuous, because the DSLR frame rate is not synchronized with the phenomenon. Apart from that, everything else would be the same. You should give it a try! If you do, please let me know =)