Putting an errorbar in your money

Like an increasingly large share of the world’s population, I have dedicated countless hours to deliberating about an unfortunate fact of life: as we grow older, we eventually might not be able to provide for ourselves due to the natural degradation of our bodies. Capitalist society, however, provides us with the choice of converting our human capital (i.e., ability to work) into assets (money, stock, real estate) that can keep us going even when we lose the ability to work.

Therefore, it is important (and heavily underappreciated) to put aside a portion of your hard-earned capital for when those hard times come. Human psychology, however, does not align very well with this rational argument. We naturally find ourselves jeopardizing these long-term goals by enjoying ourselves too much while we’re young and active, to the point of going into debt to buy the latest gadget.

You see, the consumerist culture of capitalism and the necessity of saving for the future are not mutually exclusive behaviors. A reasonably intelligent and disciplined person should be able to consume goods and thereby promote the advancement of society, through the fostering of competition and the funding of technological research that is one of the greatest achievements of consumerist capitalism. It is a matter of fact, though, that this consumerist nature tempts the less rational part of ourselves into all sorts of behaviors that are dubious from a financial planning perspective. Thus, many countries institutionalized retirement savings as mandatory through social security. The optimality of this solution is questionable, but under a “greater good” objective function it definitely is a sensible decision.

The collection of such a large pool of retirement capital under the management of a single countrywide institution has benefits and caveats that are rather important to discuss. High levels of money management specialization should be expected from such an institution, given that a fund that large can afford the best professionals in the field. On the other hand, the claim that highly specialized money managers working with institutional capital beat buying and holding the market is not corroborated by the data [as very well explained by Benjamin Felix, references in the video]. It is also reasonable that a larger pool of capital can dilute many behavioral and idiosyncratic market risks, averaging out the effects of spurious market movements. In third-world countries like the one I come from [Brazil], however, there is less trust in the effectiveness of the management of these funds, as the transparency of social security data is low and a lot of room is left for dishonest behavior. I personally see social security in such environments as another “tax”, one that does pay back in the long term but is prone to mismanagement and corruption.

Don’t retire early

With my stance in the argument set, I believe that regardless of social security one should save personal funds for discretionary retirement. You see, unlike people of the FIRE movement [Financial Independence, Retire Early], I believe careful selection of your professional career path during your 20s should be enough to provide sufficient personal satisfaction from your job that you wouldn’t need – or want – to retire early. If one’s job is fulfilling and provides a sense of contribution to society, why trade it for “enjoying life” by doing absolutely nothing useful? Obviously, enjoying vacation trips now and then is important for a healthy life balance, but I’d say that would become boring rather fast if it was the only thing you did for a couple of decades.

Granted that you chose a fulfilling career, it is sensible to keep contributing to society for as long as we physically and mentally can. If you did not, consider changing while you can. Even if it is financially less rewarding, in the long run you’ll keep it up for much longer. And the fact that you enjoy what you do usually makes you willing to go the “extra mile”, which is key to becoming respected in your area.

Index funds

Nobel laureate Eugene Fama showed through his research that, in an efficient market, actively managing your money gives you no statistical edge over an investor who simply buys and holds the market. Fama also showed that there are some specific factors with reasonable theoretical foundations that explain the returns of the market as a whole. The Fama–French “three-factor model” shows that a regression fit of historical stock pricing data can explain the performance of asset portfolios with three factors: the market factor is a “premium” for investing in the higher-risk stock market; the size factor is a premium for investing in higher-risk, smaller-capitalization stocks; and the value factor is a premium for investing in companies with a higher book-to-market ratio. I confess I don’t fully understand the theoretical justification for the increased average returns of the higher-risk stocks. One thing that I find reasonable, in my personal ignorance of the financial market, is that the market factor justifies itself as long as we have large positive macroeconomic movements (i.e., as long as population grows, the total amount of goods produced increases through better technology, etc.). I think it is a rather important limitation of Fama’s model that we need these macroeconomic movements to occur in order to have our stocks grow long term, and major macroeconomic downturns are not impossible in the future if catastrophic events occur. Due to the unlikelihood of these events, and the hopelessness of safeguarding against them even if they do occur, I still believe investing is a reasonable strategy.
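For reference, the regression behind the three-factor model, in the usual notation (SMB is the size factor, HML the value factor), looks like this:

R_i - R_f = \alpha_i + \beta_i (R_M - R_f) + s_i \cdot SMB + h_i \cdot HML + \epsilon_i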

Fama’s research sparked the creation of potentially one of the greatest tools for financial investing: index funds. Though index funds have been around for several decades now, index and other factor-related funds have popped into existence with incredibly small fees and the liquidity of a stock through exchange trading. This allows small, individual investors to decide for themselves their investment strategy and their risk tolerance in a DIY approach. If you’ve seen any other post from this blog, you know I love DIY!

The largest casino in the world

When I was 18 years old I had my first experience with the stock market. After playing with it for a few months, I concluded – in my then-naive view of the world – that the stock market is just institutionalized gambling. The emotions you feel when your money fluctuates in the market are rather bewildering, and I honestly experienced a real adrenaline pump while binge-watching my long positions fluctuate with the market’s tide. It all seemed random, though. I tried looking for patterns, learned technical analysis and applied it as a guide to my investments. But after getting deeply acquainted with it, I felt like I was just finding patterns in randomness, as we do when we see faces in clouds or stick figures made from stars in the sky. These patterns appeared to have the same predictive power as flipping a quarter. After that experience, I decided that I would not touch stocks ever again in my life.

Academic research really helped me form a more sober view of the market. The outreach work by Ben Felix also helped me see through the bullshit of financial channels and blogs on the internet. After what was pretty much a decade, I felt more prepared to give it another shot. The knowledge of statistics, scientific bias, data analysis and just plain critical thinking developed through higher education was instrumental in establishing my current, totally non-expert opinion of the financial market. So I decided to write this and share some of my humble data analysis results in the hope that other people might find it “dumbed down” enough to give it a go. I confess that some of the papers by Fama and French are still over my head due to sheer academic jargon and notation.

As I hinted before, it is worth the exercise to ask ourselves why it makes sense to invest in the market in the first place. Why does the stock market seem to grow ever higher in value? Where is the wealth being generated? Is the market a zero-sum game? If so, who is losing money?

These questions still linger in my head, to be honest. To get started, I think we need to address what a zero-sum game is. A zero-sum game is just a description of a system where the total amount of some token is conserved, such that only transfers of that token between the players are possible. No “token” is created out of nowhere. All games in casinos are zero-sum, for example. The players put their money in a pot, and the results of the game determine how much of that pot is distributed to the winners. Usually in a casino the game is such that the “house” has a slight statistical edge and will, over thousands of rounds, accumulate wealth. Since the game is zero-sum, that wealth must come from the players of the game. We have, then, a very good distinction between “investing” and “gambling”. While both endeavors are risky and statistical in nature, “gambling” is a zero-sum game. “Investing”, on the other hand, is a positive-sum game.

But how is this even possible? How can one create money out of thin air? Well, surely the Federal Reserve in the US (and its equivalents in other countries) does, right? That would make the game positive-sum, because now money has been created out of thin air. Well, not really. Though the total numerical amount of money might be larger due to the “materialization” of money, no actual wealth was created by doing so.

This brings us to an important point in investing. What does money mean? What is the nature of wealth? Well, I don’t pretend to know the answer to these questions. My readings lead me to believe that money is a token that was institutionalized in our governments through thousands of years of iterations. It seems to be a natural manifestation of society. Instead of trading goods directly, we use the money token as a convenience. It only stores value because everyone agrees it has value. Without digressing too much on why money has value, one can meditate on the fact that a way one can earn money, and therefore generate value, is through work. Careful application of one’s time and expertise to transform raw materials into more useful devices, goods or other consumables is a reasonable means of earning money. Let’s take the example of a material good, say, a chair. A chair stores value within itself because it is a useful device that allows humans to comfortably sit while doing less involved activities or just enjoying themselves. It retains its value over time because it keeps on accomplishing that task for a relatively long period, until it finally decays to the point of becoming undesirable.

In the case of the chair, the people involved in the process of harvesting the naturally occurring materials to build it, cutting them into shapes that embody the function of the chair, and finally putting it together need to be compensated for their time in doing so. Furthermore, the people involved in auxiliary services such as delivery, selling, handling and managing will also have spent a small fraction of their time on the particular chair you’re sitting in while reading this, for which they also need to be compensated. Their time, therefore, is stored in the value of the chair. And you, when making the purchase, are willing to pay your earned money to have it. Of course, your function in society also produces tangible or intangible goods in some sense, and your time is compensated such that you can afford to pay for it.

Through this reasoning I believe we can establish that goods and services store value, and the production of such goods and services is how wealth (and therefore money) is created. Some goods will last longer, thus storing wealth for a longer time. Others will last for very little time before spoiling (e.g., food) or destroying themselves, thereby retaining their value for less time. This means that wealth is also destroyed over time, and in order to have net positive wealth generation, people need to produce more value than the value that is naturally destroyed. I would say that a key requirement for this to happen is that populations keep growing, because that increases the overall demand for goods and services.

My current understanding based on this argument is that money is just an agreed-upon representation of people’s productive time. This representation is also useful to quantify the impact of one’s relative productiveness, since some people earn more money for the same amount of time invested in contributing to society. I’m not claiming that this is a fair representation, but the dynamics of market supply/demand should, to at least some extent, dictate the relative usefulness of people’s contributions. The efficiency of the job market is a point I haven’t researched much myself, however. But in a sense, this is why it is somewhat accepted that there is some positive correlation between individual wealth production and relative contribution to society (i.e., the dichotomy between highly regarded jobs such as doctors, engineers, etc., versus lower-waged jobs like the exploited workers of fast food restaurants and supermarkets). But I think this is too controversial a topic to be discussed here, because I don’t believe that people deliberately want to be useless to society.

So, HOW MUCH IS the errorbar?

Ok, this was a lot of meditation about capitalism. For personal financial decision-making, I’m sure none of that is necessary. What I really wanted to share, though, are my underwhelming observations of historical data. You see, if one believes index fund investing is a viable alternative for not only keeping the value of their money but also increasing it over time, then the evidence should point to a mean effective growth of value over time, net of inflation effects, right? Well, though that has already been shown in numerous papers, I wanted to give it a go myself. So let’s take the historical S&P 500 index data as a benchmark for data analysis. The S&P 500 index, however, does not account for inflation. So the first step is to remove inflationary effects. If we do that, we get the following chart:
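If you want to reproduce this, the correction itself is a one-liner once the two series are aligned. Here is a minimal sketch (the file names and column labels are hypothetical placeholders for the S&P 500 and CPI series from the sources cited under the chart):

```python
# Deflate the S&P 500 to constant 2019 dollars using the CPI.
import pandas as pd
import matplotlib.pyplot as plt

sp500 = pd.read_csv("sp500.csv", parse_dates=["Date"], index_col="Date")  # column "SP500"
cpi = pd.read_csv("cpi.csv", parse_dates=["Date"], index_col="Date")      # column "CPI"

df = sp500.join(cpi, how="inner")
# Rescale every point to the latest (2019) purchasing power.
df["SP500_real"] = df["SP500"] * df["CPI"].iloc[-1] / df["CPI"]

df["SP500_real"].plot(logy=True, title="Inflation-adjusted S&P 500")
plt.show()
```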

Inflation-adjusted S&P 500 index (computed on 12/05/2019). Data sources: S&P500, Inflation

Interestingly, the chart indicates about 6 times growth in the index over the course of 90 years. As of the time of this writing, the US markets are regarded to be in a “bull run”, which obviously needs to be taken into account. But I’d say everyone agrees that, on average, there is indeed an overall trend of growth even after inflation correction. For comparison, the first data point of the series, in December of 1927, shows an index value of 17.66 before correction and 262.3 after correction to 2019 money.

So there’s a mean growth. But when we buy into stock, we generally do not know where exactly we are sitting on this curve. Maybe now we’re at a peak? Maybe not; maybe we are still on the rise and the next crash will be way past 4000 points. The point is, we don’t know a priori and we can’t know. Especially for us peasants who are not involved in finance, it is a waste of our valuable human asset and skill in other fields of knowledge to attempt to predict that. The practical question for non-specialists is not only whether, statistically speaking, there is an expected return (which seems to be the case from the figure above), but also what the amplitude of the other outcomes is (i.e., good and bad). This is what I mean by putting an errorbar in your money. Every time you look at the stock market, the nominal value of your holdings is volatile – there’s some fluctuation, or noise, to it. The question I want to answer by analyzing the data here is: how much is that noise?

It is reasonable to expect this noise to change over time. Fluctuations on a daily basis should be small, but larger excursions should be expected over time, both for bull runs and bear runs. So this question only makes sense to me under a specified time horizon. We can then analyze the historical data with different time horizons. If we look at, say, a one-week time horizon, we can take any arbitrary pair of dates 7 days apart and look at the distribution of percent returns. Then we look at the return time series and average the returns, to get a mean return over a week. We can also look at statistical properties like the standard deviation and percentile values, which give us the size of that “errorbar”. So I’ve done that. The results over overlapping periods between 7 days and 40 years look like this:
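Here is a minimal sketch of that windowed-return computation, continuing from the snippet above (and approximating the calendar horizons with trading days, which is an assumption on my part):

```python
import numpy as np

def window_returns(prices, horizon):
    """Returns over every overlapping window of `horizon` samples."""
    p = prices.to_numpy()
    return p[horizon:] / p[:-horizon] - 1.0

for days, label in [(5, "1 week"), (21, "1 month"), (252, "1 year")]:
    r = window_returns(df["SP500_real"], days)
    lo, hi = np.percentile(r, [1, 99])
    print(f"{label}: mean {r.mean():+.2%}  std {r.std():.2%}  1-99%: [{lo:+.2%}, {hi:+.2%}]")
```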

Average and deviations of returns of the S&P500 index over time windows of various lengths.

Interesting observations can be made about the long term with the chart above: a mean trend of positive returns is expected over the course of 40 years. You should expect to triple your money (a 200% return), inflation corrected, over 40 years. Not as much as I hoped for, to be honest, but also not that bad. It gives me a very good sense of how much the money I save now will be worth when I retire. This also gives grounds for decision making, which is awesome!

Furthermore, observe in the chart above that at about 30 years some positive return is not only expected but was realized in 99% of the historical windows. It takes that long of a wait. This gives a good sense of the investment horizons we are talking about here.

Unfortunately a logarithmic scale can only be used on the time axis, as negative returns cannot be plotted on logarithmic scales. Therefore, the returns over periods of less than a year are rather difficult to observe. So the chart below shows a zoomed version of the data, from 7 days to 1 year. We can see the growth of the “errorbar size”. Within a week, the standard deviation is 2.8% and the 1-99 percentile range encompasses returns between -8.2% and +7.5%. In a month, the standard deviation grows to 5.8% and the 1-99 percentile range now encompasses returns between -16% and +13%. Within a year, the 1-99 percentile range grows to between -43% and +55%. Even though the mean of the returns is always positive (over a year it’s +4.54%), it is interesting to see that the distribution has a slightly larger negative tail. This suggests that emotional responses affect negative movements of the market more strongly in the short term.

Average and deviations of returns of the S&P500 index over time – zoom in the short term.

Another interesting observation I made with this data is displayed in the animated GIF below. It is interesting to see how the probability distribution is pretty much a normal distribution for periods of less than 4 months, losing that character as the periods grow longer. For a year the distribution is more triangle-shaped, and for over 3 years it starts to morph into a long positive tail. The fact that the distribution looks just like a normal distribution for short periods (i.e., less than a quarter) hints at the amount of time companies need to realize gains. It also delineates the change between stock market gambling over the short term and actual generation of wealth over the long run.

Probability distribution of gains of the S&P500 for increasingly long time windows

Conclusion

This personal analysis is, for me, very captivating evidence that the stock market is a positive-sum game. I know this is limited to the U.S. market and that the political hegemony of the U.S. is probably biasing the results toward a positive conclusion, which might not hold in the long run. Nevertheless, I believe for my short little lifespan it might still be of some valid, empirical application. The statistical distribution of gains makes me more resilient to market downturns now, since I know what kinds of movements to expect in the short term – which, unfortunately, are rather large.

As a practical mnemonic, I’d say 2 standard deviations is enough to capture the expected movement of the short-term market. This would mean weekly movements are expected to fall within about ±5%, monthly movements within about ±10%, quarterly movements within about ±20% and yearly movements within about ±40%. It’s a large errorbar to put in your money, but one that has to be accepted if any positive expected return, inflation corrected, is desired.

I hope you also got inspired to look at the data yourself. If that’s the case, have a look at my code on GitHub. The code is simple and I’ve made some simplifications for the sake of analysis. Nevertheless, I think the conclusions are quite valid. Hope you’ve learned something!

Scheimpflug – Tilt-swing adjustment in practice

As a disclaimer, I’m applying this technique in a scientific setting, but I’m sure the same exact problem arises when doing general macro photography. So, first, what is a Scheimpff…. plug?

Scheimpflug is actually the last name of Theodor Scheimpflug, who apparently described (not for the first time) a method for perspective correction in aerial photographs. This method is called by many “the Scheimpflug principle”, and it is a fundamental tool in professional photography for adjusting the tilt of the focal plane with respect to the camera sensor plane. It is especially critical in applications where the depth of field is very shallow, such as in macro photography.

As an experimental aerodynamicist, I like to think of myself as a professional photographer (and in many instances we are actually better equipped than most professional photographers in regards to technique, refinement and equipment, I reckon). One of the most obnoxious challenges that occurs time and again in wind tunnel photography is the adjustment of the Scheimpflug adapter, which is the theme of this article. God, it is a pain.


What is in focus?

First, let’s define what “being in focus” is. It is not very straightforward, because it involves a “fudge factor” called the “circle of confusion”. The gif below, generated with the online web app “Ray Optics Simulator”, shows how this concept works. Imagine that the point source in the center of the image is the sharpest thing you can see in the field of view. It could be anything: the edge of text written on a paper, the contrast of a leaf edge against the background in a tree, the edge of a hair or, in the case of experimental fluid dynamics, the image of a fog particle in the flow field. No matter what it is, it represents a point-like source of light, and technically any object in the scene can be represented as a dense collection of point light sources.

If the lens (double arrows in the figure below) is ideal and its axis is mounted perpendicular to the camera sensor, the image of the point source will converge toward a single point. If the point source and the lens are at the right distances from each other (following the lens equation), the size of the point will be as infinitesimal as the source, and the point image on the sensor will be mathematically sharp.

Point source observed in the camera sensor.

However, nothing is perfect in reality, which means we have to accept that the lens equation might not be perfectly satisfied for all the points in the subject, as that can only happen for an infinitesimally thin plane on the subject side. When the lens equation is not satisfied (i.e., as the dot moves on the subject side, as shown in the animated gif), the image of the point source will look like a miniature image of the lens on the camera sensor plane. If the lens is a circle, then the image will look like a circle. This circle is the circle of confusion, i.e., the perfect point on the object side is “confused” with a circle on the image side.


The Aperture Effect

The presence of an aperture between the lens and the camera sensor changes things a bit. The aperture cuts the light coming from the lens, effectively reducing the size of the circle of confusion. The animation below shows the circle of confusion being reduced in size when the aperture is closed. This allows the photographer to make a trade-off: if the circle of confusion is smaller, the image is acceptably sharp over a larger depth, increasing the depth of focus. But if light is being cut off, then light is being lost and the image becomes darker, requiring more exposure or a more sensitive sensor. The markings on the side of the lens for different aperture openings (f/3.3, f/5, etc.) indicate the equivalent, or “effective”, lens f-number after the aperture is applied. Since the lens focal length cannot be changed, the equivalent lens is always smaller in diameter and therefore gathers less light. The shape of the “circle of confusion” usually also changes when using an aperture, as most irises are n-gons instead of circles. This effect is called “bokeh” and can be used in artistic photography.

Effect of the aperture on the circle of confusion.


Focusing on a Plane

Hopefully all of this makes more sense now. Let’s make our example more complex with two point sources, representing a line (or a plane) that we want to be in focus. We’ll start with the plane in focus, which means both points are at the same distance from the lens. Tilting the plane will make the circles of confusion of the plane edges grow (in the gif below, tilting the plane is represented by moving one of the points back and forth). This results in a sharp edge on one side of the plane and a blurry edge on the other side.

Effect of tilting the object plane in the camera focus

The effect is usually seen in practice as gradual blurring, as the image below shows. It becomes blurry because the circle of confusion is growing, but how much can it grow before we notice it? That depends on how we define “noticing”. An “ultimate” reference size for the circle of confusion is the pixel size of the camera sensor. For example, the Nikon D5 (a mid-high level professional camera) has a pixel size of around 6.45μm. Cameras used in aerodynamics have pixels on that order (for example, a LaVision sCMOS camera has a 5.5μm pixel as of 2019). High speed cameras such as the Phantom v2012 have much larger pixels (28μm) for enhanced light sensitivity. It makes sense to use the pixel size because that’s the sharpest the camera will detect. In practice, though, unless you print in large format or digitally zoom into the picture, it is very common to accept multiple pixels as the circle of confusion. With low-end commercial lenses, the effects of chromatic aberration far exceed the focus effects at the pixel level anyway. But bear in mind that if that is the case, your 35Mpx image might really be worth only 5Mpx or so. It is also generally undesirable to have only part of the image “mathematically sharp” in a PIV experiment, since peak locking would happen only in a stripe of the image.
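To put numbers on this, here is a small thin-lens sketch (the distances, focal length and f-number are example values, not from any particular setup) of how fast the blur circle overtakes a pixel as a point leaves the focus plane:

```python
# All lengths in millimetres.
def image_distance(o, f):
    return o * f / (o - f)                  # thin lens equation: 1/f = 1/o + 1/i

def blur_circle(o_point, o_focus, f, N):
    A = f / N                               # aperture diameter from the f-number
    i_sensor = image_distance(o_focus, f)   # sensor sits where the focus plane is sharp
    i_point = image_distance(o_point, f)    # where the off-plane point actually converges
    return A * abs(i_sensor - i_point) / i_point  # similar triangles on the light cone

pixel = 5.5e-3                              # a 5.5 um pixel
for dz in [1.0, 5.0, 10.0]:                 # how far the point sits behind the focus plane
    c = blur_circle(500.0 + dz, 500.0, f=50.0, N=5.6)
    print(f"{dz:5.1f} mm off-plane -> blur {c*1e3:5.1f} um vs pixel {pixel*1e3:.1f} um")
```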

Gradual focus loss when the object plane is inclined in relation to the camera plane.


The Scheimpflug Principle

Well, this is the theory of sharpness, but how does the Scheimpflug principle help? The next animation attempts to show that. If you tilt the lens, the circles of confusion of the two points slowly approach the same size, which means there is a lens tilt for which a focal plane exists where they are exactly equal. I “cheated” a bit by changing the camera sensor size at the end, but in practice it is the camera that would be moving, not the object plane. This demo hopefully shows that there is a lens tilt angle that will bring the whole plane into focus.

Tilting the lens brings focus back to a plane parallel to the camera sensor plane.


The Hinge Rule

Though I think much deeper explanations are available on the Internet (like on Wikipedia), I personally found that playing with the optical simulation makes more intuitive sense. Now we can try to understand what the Scheimpflug hinge rule is all about from a geometrical optics perspective.

The animation below defines two physical planes: the Lens Plane [LP], where the (thin) lens line lies; and the Sensor Plane [SP], where the camera sensor is placed. If the lens is tilted, these planes will meet at a line (a point, in the 2D figure). This is the “hinge line”. The hinge line is important because it defines where the Focus Plane [FP] is guaranteed to pass through. With only these planes, however, the hinge rule would still be underdetermined.

The third reference line needed is the intersection of the Plane Parallel to the Sensor at the Lens Center [PSLC] with the Lens Front Focal Plane [LFFP]. This line is guaranteed to be parallel to the hinge line, and the two lines together define a plane – the Focus Plane [FP], where point light sources are guaranteed to be in focus. A full proof of the hinge rule is readily available on Wikipedia and is not absolutely straightforward, so for our purposes it suffices to say that it works.

Planes of interest in the classical 2D Scheimpflug adaptor


Lens Hinge vs Scheimpflug Hinge

Another confusing concept when setting up a Scheimpflug system is that the Scheimpflug adaptor itself usually has a hinge it swivels about. That hinge line (the Lens Hinge) is not to be confused with the Scheimpflug principle’s hinge explained before. But it does interfere when setting up a camera system, because the Lens Hinge is the axis the lens actually pivots about, so it ends up changing the focal plane angle and where the camera is looking, as well as the actual location of the focal plane. So I set up a little interactive Flash simulation that determines the location of the plane of focus and lets you understand the swivel movements I’m talking about. Here’s the link: http://www.fastswf.com/bHISKZA. There’s a little jitter for Scheimpflug angles close to zero due to “loss of significance” in the calculations, but it should be understandable.

Since most browsers aren’t very fond of letting Flash code run, you can also see a video of me focusing on an object plane (blue) below. In the animation, the camera/lens assembly swivels around the CH (Camera Hinge) axis and the lens swivels around the LH (Lens Hinge) axis. The Scheimpflug Hinge (SH) is only used when performing the focusing movement of the camera. The focus optimization algorithm, however, is somewhat straightforward for a 2D (1 degree of freedom – 1 DOF) setup:

  1. Look at the object plane: Swivel the camera hinge CH until the camera looks at the object.
  2. Adjust lens focus: Turn the lens focus ring (effectively moving the lens back and forth) until at least some of the object is in focus.
  3. Change the Scheimpflug adaptor: Increase/decrease the Scheimpflug angle by some (arbitrary) value. This will make the camera look away from the object plane.

Repeat the three steps as much as you need and you should converge to a good focus, as shown in the video. Sometimes I skip a step because it is unnecessary (e.g., the object is already partially in focus).


And here are the effects of the individual movements when using the Scheimpflug adaptor:


But Where’s the Lens Plane?

This one threw me off for a while, so I expect not everyone is familiar with it. Let’s say you’re trying to design a Scheimpflug system and you are using regular camera lenses (i.e., a Nikon/Canon lens). These lenses contain multiple elements, so it is not straightforward what the “focal length” the lens is rated for means and, most importantly, where this “effective lens” lies in physical space.

This reference and many others provide formulas for finding the effective focal length (EFL) of multiple lens arrangements. If the link dies, here’s the equation for a two-lens arrangement:

EFL=\frac{f_1 f_2}{f_1+f_2-d}

The effective focal length depends on the two lenses’ focal distances (f1 and f2) as well as on the distance between the two lenses (d). But most importantly, you can swap f1 and f2 (say, if you flipped the lens arrangement) and the EFL will remain the same. This is usually the case in multiple lens arrangements, and this is why most DSLR lenses are rated for a single focal length, which is their effective focal length.
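As a quick check of that symmetry claim, a sketch with arbitrary example values:

```python
def efl(f1, f2, d):
    """Effective focal length of two thin lenses separated by a distance d."""
    return f1 * f2 / (f1 + f2 - d)

print(efl(100.0, 50.0, 20.0))   # 38.46... mm
print(efl(50.0, 100.0, 20.0))   # identical: flipping the arrangement changes nothing
```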

The EFL can be regarded as a means to replace the complex lens arrangement with a single thin lens. But where is that “effective lens” in physical space? Well, that is a rather difficult question, because most lenses also have an adjustment ring for their focus distance. So, let’s start with a lens focusing at infinity.

Focusing at infinity is the same as assuming parallel rays are incoming to the lens. These parallel rays will form a sharp point exactly at the lens focal point (by definition). Well, if a compound lens is set to focus at infinity (most lenses have an adjustment where you can focus at infinity), then that point must lie on the camera sensor. Therefore, this thin lens must sit exactly one focal distance away from the image sensor of the camera. If we also know the camera’s Flange Focal Distance (FFD), then we know exactly where this “effective lens” sits with respect to the camera flange, as shown in the drawing below. This FFD is 46.5mm for a Nikon camera, for example. A comprehensive list for many cameras is found here. Also, as a bonus, the Phantom v2012 high speed camera has FFD=45.8mm when using the factory standard Nikon F-mount adaptor flange.

Effective Focal Length of a DSLR lens and its relation to the flange focal distance

If we now change the focus ring of our 50mm lens to focus at, say, 500 mm distance, we can use the thin lens formula:

\frac{1}{f}=\frac{1}{o}+\frac{1}{i}

And find that for o=500 mm and f=50 mm we get i=55.5 mm. Therefore, the thin lens moved 5.5 mm away from the sensor to focus at 500 mm instead of infinity. If you look carefully, you’ll see a lens move farther from the sensor as you bring the focus closer:
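The same two-line calculation, swept over a few focus distances, shows the trend (a sketch using the 50 mm example from the text, with a huge number standing in for infinity):

```python
def image_distance(o, f=50.0):
    return o * f / (o - f)   # thin lens equation solved for the image distance i

for o in [1e9, 5000.0, 1000.0, 500.0]:   # 1e9 mm is a stand-in for "infinity"
    i = image_distance(o)
    print(f"focus at {o:>10.0f} mm -> lens sits {i:6.2f} mm from the sensor")
```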


Good. So this means that if we want to do some fancier photography techniques (like using the Scheimpflug principle), we can now use the EFL and its relationship to the FFD to calculate our Scheimpflug adaptor and the Scheimpflug angle needed to focus on a particular feature. Remember, in most practical setups the Scheimpflug adaptor will act as a spacer, thus preventing the lens from focusing at infinity. The more space added, the closer this “far limit” gets, and the harder it becomes to work with subjects placed far from the camera.


Scheimpflug Principle in 3D [2-DOF]

So this was all under the 2D assumption, where we only need to tilt the lens to get the plane in focus. Easy enough for explanations, but you don’t really find that case very often in practice. If the object plane is also tilted in the other direction (in 3D), we’ll need to compensate for that angle too. That can be done by “swinging” the lens tilt axis. In a tilt-swing adaptor, there are two degrees of freedom for the lens angle. The “tilt” degree of freedom allows the lens to tilt as previously described. The “swing” degree of freedom swivels the lens around the camera axis, changing the orientation of the focal plane with respect to the camera. A little stop-motion animation, below, shows how these two angles change the orientation of the lens on the camera:

Or, if you’re a fan of David Guetta, you might be more inclined to like the following animation (use headphones for this one):


When doing it in practice, however, it is rather difficult to deal with the two degrees of freedom. In my experience, the causes for confusion are:

  • The object plane is static, and the camera is moving, but the movement is done with the lens first – this messes a little bit with the brain!
  • When you tilt the lens, you need to move the camera back to see the subject because now the lens is pointing away from the object plane;
  • It is rather hard to know if it is the tilt angle or the swing angle that needs adjustment in a fully 3D setup
  • It is hard to know if you overshot the tilt angle when the swing angle is wrong, and it’s also difficult to pinpoint which one is wrong.

This compounds into endless and painful hours (yes, hours) of adjustment in an experimental apparatus – especially if you’re not sure what exactly you’re looking for. Differently from most professional photography, it is usual in Particle Image Velocimetry to have a rather shallow depth of field, because we want to zoom a lot (like using a 180mm telephoto lens to look at something 500mm from the camera) and we need very small f-numbers to have enough light to see anything. Usual DoFs are less than 5mm and the camera angle is usually large (at least 30º). But enough of the rant. Let’s get to the solution:

First we need to realize that most Scheimpflug adaptors have orthogonal tilt/swing angle adjustments. In other words, the tilt and swing angles uniquely define a spherical coordinate system. This means there is only one solution to the Scheimpflug problem that will place the plane of focus in the desired location. With that said, it would be great if the solution for one of the angles (i.e., the swing angle) could be found independently of the other, because that would reduce the problem to the 2D problem described before.

The good news is that, in most setups, that can be done. To find the correct location of the swing angle (a small vector-geometry sketch follows the list):

  1. Get the normal vector of the target in-focus plane;
  2. Get the normal vector of the camera sensor;
  3. These two vectors form a plane. This is the “tilt plane”.
  4. We need the lens to tilt in this plane. To do so, the lens tilt axis needs to be normal to the “tilt plane”.
  5. Adjust the Scheimpflug swing such that the lens swivel axis is perpendicular to the “tilt plane”. That will be a “first guess” for the Scheimpflug swing. As you adjust the lens tilt, a solution is now expected – or at least something very close to one.
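Here is the vector-geometry version of steps 1-5 as a sketch (the normals are made-up example numbers, and the convention for where swing zero sits will depend on your adaptor):

```python
import numpy as np

n_target = np.array([0.5, 0.2, 0.84])  # normal of the plane we want in focus (example)
n_sensor = np.array([0.0, 0.0, 1.0])   # sensor normal: camera looks along +z

# The "tilt plane" contains both normals, so its own normal is their cross product;
# the lens tilt axis should point along that direction.
tilt_axis = np.cross(n_sensor, n_target)
tilt_axis /= np.linalg.norm(tilt_axis)  # fails only if the two planes are already parallel

# The axis lies in the sensor plane; the swing angle is its in-plane orientation
# (x = sensor horizontal, y = sensor vertical).
swing = np.degrees(np.arctan2(tilt_axis[1], tilt_axis[0]))
print(f"set the swing so the lens tilt axis sits at {swing:.1f} deg in the sensor plane")
```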

In practice there’s another complication, related to the camera tripod swivel angle. If the axis the tripod swivels about is not coincident with the axis of the “tilt plane”, then the problem is not 2D. That can be solved in most cases by aligning the camera again. But if that is not possible, it will usually require a few extra iterations on the swing angle, too.

Well, these definitions might be a little fuzzy in text. I prepared a little video where I go through this process in 2D [1-DOF] and 3D [2-DOF]. The video is available below.


Concluding Remarks

Well, I hope these notes help you better understand the Scheimpflug adaptor and be more effective when making adjustments in your photography endeavors. In practice it is almost an “art” to adjust these adaptors, so I think an algorithmic procedure always helps speed things up. Especially because these devices are mostly a tool for a greater purpose, we are not really willing to spend too much time on them anyway.

Have fun!


Jet actuator arrays, turning microjets into MIDIjets

So I’m currently working on this research problem: microjets in crossflow for disturbance-based flow control. Jets in crossflow show some promise as a viable flow control technique in aerodynamic applications, but they are still in the early-to-mid research stages, where the technology has good theoretical support (i.e., it should work) and some experimental successes (it does work, given several lab constraints, in very simple problems). Part of my thesis work will be to further the experimental side of things.

Structure of a generic jet in crossflow (Source: Coussement et al., 2012)

But when working with complex curved shapes (like any realistic aerodynamic surface), it is not clear where on the surface we should place a jet. Where is separation going to happen? Where should we place the jets to prevent it from happening, or to make it happen earlier? Maybe we want to excite boundary layer waves, like the Tollmien-Schlichting waves? From the computational/theoretical standpoint, there is some heavy-duty stability analysis that could potentially identify “sensitive” locations for the jets. I’m not fully a computational person myself, but my current opinion, given what I’ve seen so far, is that we have too many assumptions we need to trust are approximate enough (i.e., that linearized Navier-Stokes equations suffice, that jets produce content in the unstable eigenmodes of the flow, that the simulation resolved the relevant flow structures, that cows are spherical, etc.). Again, this is not my specialty, so that’s probably why I find it hard to believe in the effectiveness of that approach.

But from the experimental (wind tunnel) standpoint, we need to drill a physical hole in the aerodynamic surface and route a pipe from inside to blow the jet. That requires some work but, more importantly, it takes precious testing time when you’re testing your jet configurations. If all you were able to come up with were ineffective or mildly effective actuator patterns, that’s what you’re stuck with. And you’ll never know how close you got, because you can only afford a few data points in the experiment. Furthermore, the background fluid dynamics knowledge required to come up with effective patterns takes decades of study and experience – which I don’t have. So I suggested: why don’t we manufacture a reconfigurable actuator array and let a computer run thousands of pattern configurations? We could potentially abstract the jet placement problem away from the fluid mechanics realm into a (rather complex, I admit) optimization problem. More jet configurations can be explored, increasing the confidence in the solutions found. And with the beauty of advanced flow diagnostics, we can even learn new physics from these solutions.

But then you might ask: why don’t you just do CFD on these jets? Well, it turns out that in order to perform any simulation work with jets in crossflow we need an obscene amount of resolution, which increases the computational time to the extent that it is just easier to do the experiment. When it involves multiple jet configurations, you really need to be able to discard multiple runs, which requires them to be cheap. It’s a similar thing with AI: AI is only possible now because each iteration has become cheap to run, even though the math and the theoretical foundations come from several decades ago.

So this is the road I’m going down now: basically making microjet actuator studies cheap to run, so we can discard most of them and try random stuff until we hit the jackpot. But even though the prospect of having to build a reconfigurable array with hundreds of jets may sound like a rather daunting task, there’s some fun along the path. And this is the point of this post!

Solenoid array under construction (62 solenoids shown here)

I’m building a manifold with 100 solenoids that can be individually controlled by a reconfigurable signal generator I designed and built (pictures below!). The signal generator board is based on a PIC32MZ (design here) and has effectively 108 channels. I was able to update all channels simultaneously at 24900 samples/s (well, there’s a 200ns delay between physical uC ports, but that’s virtually instantaneous from the mechanical standpoint). I designed it such that the board appears as a USB serial COM port on your computer, which can then receive messages through either a serial terminal or a serial interface in Matlab or C++, for example. This gives me a lot of control over the jets.

While putting all of this together and seeing the results of the system I built, I figured: hey, I can turn this into a musical instrument! Of course, a rather crude one, because my bandwidth is very low (like <200Hz). But I decided anyway to code up a MIDI driver for this jet array and then make the notes fit the bandwidth by shifting the song down a few octaves. The result is rather crude, but it was so much fun to play with! MIDI files, for the uninitiated, are like a digital version of sheet music. They contain a table of notes, the timing of when they should be played, and for how long. My job was simply to convert the digital instructions into the protocol I came up with for my serial communication and stream them to the USB serial port.
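The driver boils down to very little code. A sketch of the idea in Python (the two-byte jet command and the port name here are hypothetical – the real protocol is whatever your firmware defines – and the `mido` and `pyserial` packages are assumed installed):

```python
import time
import mido      # MIDI file parsing
import serial    # pySerial, to talk to the board's USB COM port

port = serial.Serial("COM5", 115200)     # the board enumerates as a plain serial port
LOWEST_NOTE, N_JETS = 36, 100            # fold the song's notes onto the 100 solenoids

for msg in mido.MidiFile("song.mid"):    # yields messages in playback order
    time.sleep(msg.time)                 # msg.time is seconds since the previous message
    if msg.type in ("note_on", "note_off"):
        jet = (msg.note - LOWEST_NOTE) % N_JETS
        state = 1 if (msg.type == "note_on" and msg.velocity > 0) else 0
        port.write(bytes([jet, state]))  # hypothetical two-byte "jet on/off" command
```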

So here are a few songs I was able to play to a level where I believe people can actually recognize them. See if you can! (Answers in the description of the video.) If you want more info on how I did it, perhaps you might consider following my research on ResearchGate – and maybe a few years from now an academic paper on this topic will come from it! =)


Finding Vortex Cores from PIV fields with Gamma 1

Vortex core tracking is a rather niche task in fluid mechanics that can be somewhat daunting for the uninitiated in data analysis. The Matlab implementation by Sebastian Endrikat (thanks!), which can be found here, inspired me to dive a little deeper. His implementation is based on the paper “Combining PIV, POD and vortex identification algorithms for the study of unsteady turbulent swirling flows” by Laurent Graftieaux, which was probably one of the first to perform vortex tracking on realistic PIV fields. The challenge is that PIV introduces noise into the velocity fields, due to the uncertainties of the cross-correlation algorithm that tracks the particles. This noise, added to the fine-scale turbulence inherent to any realistic flow field encountered in experiments, makes vortex tracking through derivative-based techniques (such as λ2, the Q criterion and vorticity) pretty much impossible.


Computational results are less prone to this noise and are usually tamer in regards to vortex tracking, though fine-scale turbulence can also be a problem there. The three-dimensionality of flow fields doesn’t help. But many relevant flow fields can be deemed vortex dominated, where an obvious vortex core is present in the mean. Wingtip vortices are a great example of these vortex-dominated flow fields, though there are many other examples in research, from pretty much any lift-generating surface.


As part of my PhD research I’m performing high speed PIV (Particle Image Velocimetry) on the wake of a cylinder with a slanted back (maybe a post about that later?). This geometry has a flow field that shares similarities with military cargo aircraft, but is far enough from the application to be used in publicly-available academic research. The cool part is that it forms a vortex pair, which is known to “wander”. The beauty of having bleeding-edge research equipment is that we can visualize these vortices experimentally in a wind tunnel. But how do we turn that into actual data and understanding?


That’s where Gamma 1 tracking comes into play. Gamma 1 is great because it’s an integral quantity. It is also very simple to describe and understand: if I have a vector field and I’m at the vortex core, I can define a vector from me to any point in this vector field (this vector is called “PM” by Graftieaux). The angle between this vector and the velocity vector at that arbitrary point would be exactly 90º if the vortex were ideal and I were at the vortex core. Otherwise, it would be some other angle. So if I just look at many vectors around me, I just need to take the mean of the sine of the angle between these two vectors. This quantity should peak at the vortex core. That’s Gamma 1, brilliant!
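In symbols (up to notation differences with the paper), for a point P and its N window neighbors M with velocities \vec{U}_M:

\Gamma_1(P)=\frac{1}{N}\sum_{M}\frac{\left[\vec{PM}\times\vec{U}_M\right]\cdot\hat{z}}{\|\vec{PM}\|\,\|\vec{U}_M\|}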

Sebastian Endrikat did a pretty good job implementing Graftieaux’s results, and I used his code a lot. But since each run of mine has at least 5000 velocity fields, his code was taking waaaay too long. Each field would take 4.5 seconds to parse on a pretty decent machine! So I decided to look back at the math, and I realized that the same task can be accomplished by two convolutions after some juggling. A write-up of that is below:

PDF File with the math
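For the impatient, here is a minimal numpy/scipy sketch of the two-convolution trick (my own rederivation, not the Matlab code, so verify the sign conventions against a known vortex, as done at the bottom):

```python
import numpy as np
from scipy.signal import fftconvolve

def gamma1(u, v, radius=10):
    """Gamma 1 field from 2D velocity components u(y, x), v(y, x),
    computed with two FFT convolutions instead of a loop over points."""
    # Unit velocity vectors (Gamma 1 only uses the sine of the angle).
    mag = np.hypot(u, v)
    mag[mag == 0] = np.inf             # zero-velocity cells contribute nothing
    uh, vh = u / mag, v / mag

    # Kernel of unit offset vectors r_hat(w) over the interrogation window.
    w = np.arange(-radius, radius + 1)
    wx, wy = np.meshgrid(w, w)         # wx varies along columns (x), wy along rows (y)
    r = np.hypot(wx, wy)
    r[radius, radius] = np.inf         # the center point does not count
    kx, ky = wx / r, wy / r
    n = (2 * radius + 1) ** 2 - 1      # number of neighbors in the window

    # Gamma1(P) = (1/n) * sum_w [kx(w)*vh(P+w) - ky(w)*uh(P+w)] is a cross-
    # correlation; since kx, ky are odd (k(-w) = -k(w)), convolution = -correlation.
    return (-fftconvolve(vh, kx, mode="same")
            + fftconvolve(uh, ky, mode="same")) / n

# Sanity check on a solid-body vortex (u = -y, v = x): Gamma 1 -> 1 at the core.
y, x = np.mgrid[-50:51, -50:51].astype(float)
print(gamma1(-y, x)[50, 50])   # close to 1.0
```

Note the window near the image borders is truncated by the zero padding, so values there are underestimated; for core tracking away from the edges that doesn’t matter.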


The result, though, is really impressive. Each field now takes 5 milliseconds (3 orders of magnitude faster!) to parse on the same machine. So good I made a video of the vortex core. Here it is:


I’m really thankful that amazing people like Graftieaux and Endrikat are in the academic community publishing this stuff. Standing on the shoulders of giants!

The floating light bulb: Theory vs Practice

Yes – you can go to Amazon.com today and buy one of these gimmicky toys that float a magnet in the air. Some of them will even float a circuit that can light an LED and become a floating light bulb. A floating light bulb powered by wireless energy? What a time to be alive!

A quote from Arthur C. Clarke, who wrote “The Sentinel” (which later became the basis for the science fiction movie “2001: A Space Odyssey”), goes:

“Any sufficiently advanced technology is indistinguishable from magic.”

This is what led me to the engineering path. Because if advanced technology is indistinguishable from magic, then whoever creates the technology is a real-life wizard. And who creates the technology? The engineers and scientists all around this world. So let me complement his quote with my own thoughts:

“Any sufficiently advanced technology is indistinguishable from magic. Therefore, engineers and scientists are the true real-life wizards.”

Of course, if I’m writing about it, it is because I went through the engineering exercise. And boy, did I think it was an “easy” project. You see these floating-stuff projects around the internet, but nobody talks about what goes wrong. So here we’ll explore why people spend so much time tweaking their setups and what the traps along the way are.

But first, some results to motivate you to read further:


Prof. Christian Hubicki was kind enough to let me pursue this as a graduate course project in the Advanced Control Systems class at FSU, so I ended up with a “project report” on it. It is in the link below:

Full 11-page report with all the data

But if you don’t want to read all of that, here’s a list of practical traps I learned during this project:

  1. DON’T try fancy control techniques if you don’t have fast and accurate hardware. This project WILL require more than a 10-bit ADC and more than 3-5 kS/s. The dynamics are very fast because the solenoid is fast. And you want a fast solenoid to be able to control the levitating object! Unless you can have a large solenoid inductance and a rise time on the order of ~100ms, there’s no way an Arduino implementation can control this. I think a nice real-time DAQ controller (like the ones offered by NI) could work here. But an Arduino is just too strapped in specs to cut it! The effects of sampling and digitization are too restrictive. It MIGHT work in some specific configurations, but it is not a general solution (and it certainly didn’t work for me).
  2. Analog circuits are fast – why not use them? Everyone (in the hobby electronics world) thinks an Arduino is a silver bullet for everything. Don’t forget an op-amp is hundreds of times faster than a digital circuit!
  3. Bang-bang! Many implementations on the web use a hysteresis (or bang-bang) controller. The bang-bang controller is ideal for cheap projects because it deals with the non-linearities gracefully. But it is not bullet-proof either: it will become unstable, even with high bandwidth, if the non-linearity is strong enough.
  4. Temperature effects: The dynamic characteristics of your solenoid will change as it heats up (you’re dissipating power to turn it on!). So if you have, say, a PID controller, tuning the gains can get very confusing, because the gains will be different depending on the temperature of the coil. Since this effect is very slow (on the order of 10 minutes!), you can end up chasing your own tail because you’re tuning a plant that is changing with time!
  5. The wireless TX introduces noise! This one is particular to this project: if you’re using a Hall effect sensor to sense the presence of the floating object (by its magnetic field), then your Hall sensor will also measure the solenoid’s field! Apart from that, the TX is also generating a high-frequency magnetic field, which will also show up in the Hall effect sensor signal. The effect of the TX is very small (~2mV) but it appears on the scope. The problem is that Arduinos don’t have low-pass filtering on their ADC inputs, so anything above the sampling rate will appear as an “aliased” signal, which is very nasty to deal with (a one-line aliasing estimate follows this list).
  6. Make sure your solenoid can lift your object and more. This is an obvious one, but I think it is easy to overlook how much you need to over-design it. I designed my solenoid to lift 100 grams of weight. But in the end, I could only work with 35 grams, because the controller needed a lot of headroom to work. So overdesign is really crucial here. I ended up shaving a lot of mass from the floating object because I couldn’t lift the original design’s mass!
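For item 5, the folded frequency is one line of arithmetic. A sketch with made-up numbers (an Arduino-ish sampling rate and a hypothetical TX switching frequency):

```python
fs = 4000.0       # sampling rate, samples/s
f_tx = 125e3      # TX magnetic field frequency, Hz
f_alias = abs(f_tx - round(f_tx / fs) * fs)   # where the tone folds to, in [0, fs/2]
print(f"{f_tx/1e3:.0f} kHz ripple aliases to {f_alias:.0f} Hz after sampling")
```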

I’d like to put up a more complete tutorial on making this, but since I already invested a lot of time in putting the report together, I think if you spend some time reading it and the conclusions from the measurements/simulations, you will be able to reproduce this design or adapt the concepts to your own. Let me know if you think this was useful or if you need any help!

Sub-microsecond Schlieren photography

(Edit: My entry on the Gallery of Fluid Motion using this technique is online!)

For the ones not introduced to the art of Schlieren photography, I can assure you it was incredibly eye-opening and fascinating to me when I learned that we can see thin air with just a few lenses (or even just one mirror, as Josh The Engineer demonstrated here on a hobby setup).

For the initiated in the technique, its uses are obvious in the art and engineering of bleeding-edge aerodynamic technology. Supersonic flows are the favorites here, because of the presence of shock waves, which make for beautiful, crisp images and help us understand and describe many kinds of fluid dynamics phenomena.

Schlieren image of a 2mm supersonic microjet taken at Florida State University FCAAP laboratory. Illumination time is 500 nanoseconds, taken with a Nikon D90 DSLR to demonstrate the potential for hobby applications. Note the crispiness of the image – the flow was effectively frozen.

What I’m going to describe in this article is a very simple circuit published by Christian Willert here, which most likely is paywalled and might have too much formalism for someone who is just looking for some answers. Since the circuit and the electrical engineering are pretty basic, I felt I (with my hobby-level electronics knowledge) could give it a go, and I think you should too. I am also publishing my EasyEDA project if you want to make your own boards (yes, EasyEDA).

But first, let’s address the elephant in the room: why should you care? Well, if you ever tinkered with a Schlieren/shadowgraph apparatus – for scientific, engineering or artistic purposes – you might be interested in taking sharper pictures. Obtaining sharper pictures of moving stuff works exactly like in regular photography: it can be achieved by reducing the aperture of the lens, by reducing the exposure time or by using a flash. The latter is where a pulsed light source really shines (pun intended!). The great part here is that the first two options involve reducing the amount of light – whereas the last one doesn’t (necessarily).

The not-so-great part is that camera sensors are “integrators”: they measure the number of photons absorbed over a given amount of time. Therefore, what really matters is the total amount of photons you send to the camera. Of course, if you sent an insanely large number of photons in a very short instant, you would risk burning the camera sensor – but if you’re using an LED (as we are going to here), your LED will be long gone before that happens.

So the secret to high speed photography is to have insanely large amounts of light dispensed at once. That guarantees everything will be as sharp as your optics allow. Since we don’t live in the world of mathematical idealizations, we cannot deliver anything “instantly”, and therefore we have to live with some finite amount of time. “Brief enough” is relative and depends on what you want to observe. For example, if you’re taking a selfie at a party, tens of milliseconds is probably brief enough to get sharp images. For taking a picture of a tennis player doing a high speed serve, you’re probably fine with tens or hundreds of microseconds. The technical challenges begin to appear when you’re taking pictures of really fast stuff (like supersonic planes) or at larger magnifications. The picture of the jet above is challenging in both ways: its magnification level is 0.7x (meaning the physical object is projected onto the sensor at 0.7x scale) and its speed is roughly 500 meters per second. In other words, the movement of the object (the Schlieren object) is happening at roughly 63.6 million px/second, which requires a really fast shutter to have any hope of “freezing the flow”. If you’re fond of making simple multiplications in your calculator, the equation is very simple:

D=\frac{M\,v}{s_{px}}

Where D is the object displacement in px/second, v is its velocity in physical units (e.g., m/s), M is the magnification achieved in the setup, and s_{px} is the physical pixel size of your camera (e.g., s_{px}=5.5\,\mu m for a Nikon D90).
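
If you want to check my arithmetic, here is that formula evaluated for the jet picture above – a plain C back-of-the-envelope, using only the numbers already quoted in this post:

```c
#include <stdio.h>

int main(void) {
    double M    = 0.7;     /* magnification of the Schlieren setup      */
    double v    = 500.0;   /* jet velocity, m/s                         */
    double s_px = 5.5e-6;  /* physical pixel size, m (Nikon D90)        */

    double D = M * v / s_px;                    /* displacement in px/s */
    printf("D = %.1f million px/s\n", D / 1e6); /* prints ~63.6         */
    return 0;
}
```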

I know, I know. These are very specialized applications. But who knows what kind of high-speed photography is happening right now in someone’s garage, right? The point is: getting a light source that is fast enough is very challenging. Some options, such as laser-pulsed plasma light sources, get really expensive even if you build them yourself. LEDs, on the other hand, are a well-established, reliable technology with an incredibly fast rise time. And they can get very bright, too (well… kinda).

So what Willert and his coauthors did was very simple: overdrive a bright LED at 20 times its design current and hope it doesn’t explode. Spoiler alert: some LEDs didn’t survive this intellectual journey. But they mapped the safe operating regions for overdriven LEDs from many different manufacturers – to name a few, the Luminus Phlatlight CBT-120, the Luminus Phlatlight CBT-140, and the Philips LXHL-PM02. These are raw LEDs, no driver included, rated for ~3.6-4V, and they are incredibly expensive for an LED. Prices range from $100 to $150, and they are usually employed in automotive applications. The flash is powerful to the point of being blinding. And if they do burn out, it hurts the hobbyist’s pocket.

Circuit.png
LED driver power section.

The driver circuit (which is available here) is very simple: an IRF3805 N-channel power MOSFET just connects the LED to a 24V power supply. Remember the LED is rated for 4V – so it’s going to get a tiny bit brighter (sarcasm). Jokes aside, the LED (CBT-140) is rated for 28A continuous with very efficient heatsinking, which means we will definitely be overdriving it. By how much, we can measure with R2: hooking a scope between Q1 and R2 is harmless to the scope and lets us measure the current going through the LED (unless the current exceeds ~600A, in which case the voltage spike when the MOSFET turns off can reach a few tens of volts). We don’t want to operate at those currents anyway, because the LED will end up like the one in the figure below. There’s a trim pot (R3) that controls the MOSFET gate voltage; make sure pin 2 of U1 is outputting a low voltage while tuning it.

LEDburn.jpg
A sacrifice for science.
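
By the way, turning that scope reading across R2 into an actual current is just Ohm’s law. A minimal sketch, assuming R2 is the 10mOhm shunt mentioned in the scope-trace captions further down (check your own board’s value before trusting this):

```c
#include <stdio.h>

int main(void) {
    double R2      = 0.010;  /* assumed shunt resistance: 10 mOhm       */
    double v_shunt = 2.0;    /* example scope reading across R2, volts  */

    double i_led = v_shunt / R2;        /* Ohm's law: I = V / R         */
    printf("I_LED = %.0f A\n", i_led);  /* 2 V across 10 mOhm -> 200 A  */
    return 0;
}
```

So a mere 2V on the scope already means a 200A pulse through the LED.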

What is really happening is that C1 and C2 (C2 is optional) are charged by the 24V power supply while the MOSFET is off; they then discharge into the LED when the MOSFET is activated. No power supply can push 200A continuously through an LED, so if the transistor stays on for too long, the supply voltage drops and the power supply resets. Actually, this is one of the ways to tell you’ve melted the MOSFET (which happened to me once). The MOSFET needs to turn on in nanoseconds, which requires a decent amount of current (4-5 amps) just to charge up the gate. This means we need a driver IC – in this case a UCC27424. Make sure to have as little resistance as possible between the driver and the gate to minimize the time constant. The 1.5 Ohm gate resistor comes very close to delivering 4A to the MOSFET. Since the gate capacitance is around 8nF, the gate RC time constant is a somewhat slow 12 ns.
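
For the skeptical, that 12 ns figure is just the RC product of the numbers above:

```c
#include <stdio.h>

int main(void) {
    double R_gate = 1.5;   /* resistance between driver and gate, ohms */
    double C_gate = 8e-9;  /* MOSFET input capacitance, ~8 nF          */

    double tau = R_gate * C_gate;              /* RC time constant      */
    printf("gate tau = %.0f ns\n", tau * 1e9); /* prints 12 ns          */
    return 0;
}
```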

Speaking of time constants: during the design I realized that the time constant formed by the capacitor that discharges into the LED and the parasitic inductances in the path between the components dictates the rise time of the light pulse, at least for the most part. In my circuit, the time constant, measured directly with a photodiode, was 100ns. This means we can do >1MHz photography, which is pretty amazing! Unfortunately, cameras capable of a million frames per second aren’t really accessible to mortals (except when said mortals work in a laboratory that happens to have one!).

Well, the LED driver circuit is still in development – which means I’ll keep updating this post every now and then. But for now, it works well enough. The BOM cost is not too intimidating (~$60 at Digikey without the LED; add the LED and we’re at ~$200), so a hobbyist can really justify the investment if it means an equivalent number of hours of fun! Furthermore, the circuit includes a microcontroller that monitors and displays the LED and driver temperatures, with an auto shut-off that disables the MOSFET driver if the temperature exceeds an operating threshold. The thermal limits are still to be evaluated, though.
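
The firmware isn’t finalized, but the shut-off logic amounts to something like the sketch below. Everything here is a placeholder: the 80°C threshold is a number I picked purely for illustration (the real limits are still being evaluated), and the two helper functions stand in for whatever temperature sensor and gate-enable hardware you use:

```c
#include <stdbool.h>
#include <stdio.h>

#define TEMP_LIMIT_C 80.0  /* illustrative threshold -- real limits TBD */

/* Hypothetical hardware helpers, stubbed so the sketch runs standalone. */
static double read_led_temp_c(void)      { return 25.0; } /* e.g. NTC via ADC */
static double read_driver_temp_c(void)   { return 30.0; }
static void   set_driver_enable(bool on) { printf("driver enable: %d\n", on); }

int main(void) {
    /* Monitoring loop: display both temperatures and kill the MOSFET
       driver if either one exceeds the operating threshold.            */
    for (int i = 0; i < 3; ++i) {  /* a few passes for demonstration    */
        double t_led = read_led_temp_c();
        double t_drv = read_driver_temp_c();
        printf("LED: %.1f C, driver: %.1f C\n", t_led, t_drv);
        if (t_led > TEMP_LIMIT_C || t_drv > TEMP_LIMIT_C) {
            set_driver_enable(false);  /* auto shut-off                 */
            return 1;
        }
    }
    return 0;
}
```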

Circuit.png
Circuit board and a (crude) 3D printed case for the LED.

For now, I did my own independent tests, and the results are very promising. Below is the test rig I used to evaluate the illumination rise and fall times of the LED. The photodiode is a Thorlabs model (I forget which) with a 1ns rise time when attached to a 50 ohm load. It’s internally biased, which is nice when you want to do a quick test.

TestsLED.png
Test rig for photodiode illumination response measurements

The results from the illumination standpoint are rather promising. The scope traces below show that the LED lights up in a very short time and reaches a pretty much constant on-state. The decay, however, seems to be controlled by a phosphorescence mechanism – probably because this is a white LED, whose phosphor coating keeps glowing briefly after the blue die turns off. Nevertheless, the pulses are remarkably brief.

[Scope screenshots: 100ns, 300ns and 1000ns input pulses]
Scope screens for the LED illumination (blue curve) as seen by the photodiode. The yellow curve is the current, measured via a 10mOhm resistor at the MOSFET source. Traces correspond to 100ns, 300ns and 1000ns input pulse widths, respectively.

The good thing about having high-speed cameras is that now we’re ready to run some experiments. By far, my favorite one is shown below. I was able to use the Schlieren setup to observe ultrasonic acoustic waves at 80kHz, produced by a micro impinging jet (the jet is 2mm in diameter). The jet is supersonic; its velocity is estimated at 400 m/s. Just to make sure you get what is in the video: the gray rectangle at the top is the nozzle. The shiny white line at the bottom is the impingement surface. The jet impinges downwards, at the center of the image. The acoustic waves are the vertically traveling lines of bright and dark pixels. I was literally able to see sound! How cool is that?

[Video: the supersonic impinging microjet and its 80kHz acoustic waves, seen through the Schlieren setup]

Just as a final note: you might be discouraged to learn that I am one of those mortals who happens to have access to a high-speed camera. But bear in mind, these pictures could have been taken with a regular DSLR. The only difference is that the frame sequence wouldn’t look continuous, because the DSLR frame rate is not synchronized with the phenomenon. Apart from that, everything else would be the same. You should give it a try! If you do, please let me know =)

My POV Display V1.0 – Wireless Powered

In a previous post I spoke about wireless power transfer and some engineering I was trying to do with it. The project proved very effective, and I’m quite happy with the results I got with it. I’m quite certain someone has already had this idea, but the hobbyist endeavor is really useful anyway! And we all know the world is way too big, so sometimes it’s better not to bother being innovative and just enjoy the build. Anyway, I thought maybe I could give my two cents to the DIY hacker community!

I wrote a simplified guide to making this display at Instructables (here!). There’s no reason to cover the basics again in the blog, so here I’ll dive further for those who want to play along at home (as Dave Jones from the EEVblog would say…). I’ll nevertheless talk about the build, because it would be a shame for it not to be in this blog:

1. The Design

So, as with any other design, I began with the question: what can I do with the processes at my disposal and the budget I have? I knew a guy who had a laser cutter (he would charge me anyway, but at least I knew him). Also, I would inevitably have to lathe at least one part, because I didn’t own a 3D printer and I needed to connect my motor shaft to the display. So I set the budget at ~100 USD (plus whatever I already had) and began with the overall characteristics:

“How many pixels do I want?” – Well, I contacted a PCB supplier who told me he would cut 200x200mm PCBs, no problem. I wanted the LEDs directly on the PCB, so the constraint was set: a radius of ~85mm over an angle of ~150 degrees (no point in having the south pole, as it would be covered). Run the numbers for 5mm LEDs and we get ~40 LEDs max. Any electrical restrictions? I had some PIC16F877As lying around. They have 4 full 8-bit ports, which makes it really easy to toggle all the LEDs at once in the routines (I’m not that great of a programmer, although I like the PIC!). I ended up using 4 ports, but only 6 bits of each port, for a total of 24 LEDs. It’s not great resolution, I know, but it’s a first prototype, so bear with me ^^
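
If you want to check the geometry, the maximum LED count is just the arc length divided by the LED diameter. A quick sanity check with the dimensions above (leaving some margin gets you to the ~40 figure):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI = 3.14159265358979;
    double radius_mm = 85.0;   /* PCB radius                          */
    double span_deg  = 150.0;  /* populated angular span              */
    double led_mm    = 5.0;    /* 5mm LED diameter                    */

    double arc_mm = radius_mm * span_deg * PI / 180.0;  /* arc length  */
    printf("arc = %.0f mm -> up to %.0f LEDs\n",
           arc_mm, floor(arc_mm / led_mm));  /* ~223 mm -> 44 LEDs     */
    return 0;
}
```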

“Update rate?” – Well, at least 20Hz to be convincing, right? I had an old fan motor lying around (haha!). I didn’t know its speed, though. But I have a function generator, so I made a quick stroboscopic tachometer and measured it. The tachometer is just an LED and a resistor connected directly to the function generator’s output – adjust the frequency until you see the motor freeze, then make sure that half that frequency doesn’t also freeze it (to rule out having locked onto a multiple of the true speed). It turned out to spin at 30Hz. Unfortunately, after loading, it spun at only ~24.5Hz.

“Can I update these pixels fast enough?” – The raw signal rate is easy to deal with: 30Hz * ~180px per revolution = 5.4kHz. How many processor cycles do I have for my calculations? (20MHz/4)/5.4kHz ≈ 926. That seems to be enough. But I still need to nail that update frequency (5.4kHz) to within less than half a pixel of error for the effect to be convincing (otherwise there’ll be too much jitter in the image). That means 5400±7.5Hz – or between 924.6 and 927.2 cycles, roughly ±1.3 cycles. That’s more challenging. I mean, it can be done, but I think it’s close to the limit for this PIC’s clock frequency. Unfortunately, a faster clock means buying another microcontroller, which I don’t want to do. So let’s give it a try =)
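
Here’s that cycle-budget arithmetic in one place (plain C, same numbers as above), including how tight the window gets once you demand the ±7.5Hz tolerance:

```c
#include <stdio.h>

int main(void) {
    double f_rot  = 30.0;        /* motor speed, rev/s                  */
    double px_rev = 180.0;       /* angular pixels per revolution       */
    double f_cyc  = 20e6 / 4.0;  /* PIC16F: 4 clocks per instruction    */
    double tol_hz = 7.5;         /* allowed pixel-clock error           */

    double f_px = f_rot * px_rev;             /* pixel clock: 5.4 kHz   */
    printf("budget: %.1f cycles\n", f_cyc / f_px);           /* ~925.9  */
    printf("window: %.1f to %.1f cycles\n",
           f_cyc / (f_px + tol_hz),                          /* ~924.6  */
           f_cyc / (f_px - tol_hz));                         /* ~927.2  */
    return 0;
}
```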

“How much power do I need if the display is fully on?” – That’s an easy one: 24 LEDs at 5V and 20mA each is 2.4W. On the transmitter side, probably close to 10W, given the inefficiency of a homemade device. The IRF630 should handle this, no problem.

Now that I had some idea of what I wanted to do and what the limitations would be, I dove into the PCB design. In my design the PCB spins, and it looks like this:

[Image: the spinning PCB]

I confess routing the LED connections was quite difficult. I’m glad I chose not to use 52 LEDs now! For more LEDs I would surely need a 3D arrangement or some sort of serial addressing scheme. But for a beginner, let’s keep it easy.

It was quite straightforward to design the case around this PCB. I chose to cut most pieces out of acrylic, but as I said, one of them had to be lathed: the connection between the motor and the spinning part. The design looks more or less like this:

[Image: case design]

Sadly, at that time I hadn’t yet harnessed the power of the Prusa, so I couldn’t print the shaft connection. The entire project would have been different had I had a 3D printer back then.

Design in hand, I shot some e-mails to quote the parts. The costs were (as of 2017):

[Table: project costs]

Yes, electronics are expensive in Brazil. And yes, hardware is cheap.

 

2. The Build

As I received the parts, I began to build. Apart from the hours it took, I didn’t actually have any trouble with it. Slowly building and checking that each part of the circuit worked as intended, it wasn’t really eventful. The worst thing that happened was an open trace at one LED (which means the PCB manufacturer wasn’t really great), which I had to bridge. Below are some build pictures:

[Images: build pictures]

That was hours of fun, as you might imagine. I know it’s not the best in its category, but it’s my child!

 

3. The programming

That’s the boring-to-talk-about but fun-to-do part where you get to use the device you built and actually bring it to life. Yes, the issue of precise timing did appear during programming. It also turned out the motor speed wasn’t constant; it oscillated between 24 and 25Hz. Fortunately, I had a zero datum triggered by an IR LED. That helped a lot, but didn’t completely solve the jitter from the motor. Below are some “failed” tests from while I was programming it:

[Videos: early “failed” tests]

I’m sure you want to see the finished product working. So here it is:

[Video: the finished display running]

4. Conclusion

Well, as you can see, this is more of a picture repository than anything else. If you want to take a closer look at the project, please check my GitHub page at https://github.com/3dfernando/Wireless-Power-POV-Display. The project files are all stored there.
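
If you just want the gist of the timing trick without digging through the repo, here is a minimal sketch of it in plain C. This is not the actual PIC firmware – the helper that timestamps the IR pulse is simulated here so the sketch runs standalone. The idea: measure the timer count between consecutive zero-datum pulses and re-derive the per-pixel period every revolution, so the 24-25Hz speed drift largely cancels out:

```c
#include <stdint.h>
#include <stdio.h>

#define PX_PER_REV 180  /* angular pixels per revolution in this build */

/* Hypothetical helper: timer count captured at each IR zero-datum pulse.
   The stub simulates a motor drifting between 24 and 25 Hz, timed with
   a 5 MHz instruction clock (20 MHz / 4).                              */
static uint32_t capture_index_timestamp(void) {
    static const uint32_t rev_ticks[] = { 208333, 204082, 200000 };
    static uint32_t t = 0;
    static int i = 0;
    t += rev_ticks[i++ % 3];
    return t;
}

int main(void) {
    uint32_t last = capture_index_timestamp();
    for (int rev = 0; rev < 3; ++rev) {
        uint32_t now = capture_index_timestamp();
        uint32_t rev_period = now - last;               /* ticks per revolution */
        uint32_t px_period  = rev_period / PX_PER_REV;  /* new per-pixel reload */
        printf("rev %d: %lu ticks -> %lu ticks/px\n",
               rev, (unsigned long)rev_period, (unsigned long)px_period);
        last = now;
    }
    return 0;
}
```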

I’d say the biggest lesson learned from this project is that processor speed is crucial for any display device. With so many pixels to update, there’s just never enough time. I gained a newfound admiration for our high-resolution full-HD screens. What an engineering feat!