My POV Display V1.0 – Wireless Powered

In a previous post I talked about wireless power transfer and some engineering I was trying to do with it. The project proved very effective, and I’m quite happy with the results. I’m fairly certain someone has already had this idea, but the hobbyist endeavor is worthwhile anyway! And we all know the world is way too big, so sometimes it’s better not to worry about being innovative and just enjoy the build. Anyway, I thought I could give my two cents to the DIY hacker community!

I wrote a simplified guide to making this display at Instructables (here!). There’s no reason to cover it all again, so here I’ll just dive deeper for those who want to play along at home (as Dave Jones from the EEVBlog would say…). I’ll still talk about the build, though, because it would be a shame not to have it in this blog at all:

1. The Design

So, as with any other design, I began with the question: what can I do with the processes at my disposal and the budget I have? I knew a guy with a laser cutter (he would charge me anyway, but at least I knew him). I would also inevitably have to lathe at least one part, because I don’t own a 3D printer and I needed to connect my motor shaft to the display. So I set the budget at ~100 USD (plus whatever I already had) and began with the overall characteristics:

“How many pixels do I want?” – Well, I contacted a PCB supplier who told me he could cut 200x200mm PCBs no problem. I wanted the LEDs directly on the PCB, so the constraint was set: a radius of ~85mm -> an arc of ~150 degrees (no point in having the south pole, as it would be covered). Run the numbers for 5mm LEDs and we get ~40 LEDs max. Any electrical restriction? I had some PIC16F877As lying around. They have plenty of I/O (ports B, C and D are full 8-bit), which makes it really easy to toggle many LEDs at once in the routines (I’m not that great a programmer, although I like to use the PIC!). I ended up using 4 ports, but only 6 bits per port. That’s a total of 24 LEDs. Not great resolution, I know, but it’s a first prototype, so bear with me ^^

“Update rate?” – Well, at least 20Hz to be convincing, right? I had an old fan motor lying around (haha!), but I didn’t know its speed. I do have a function generator, though, so I made a quick stroboscopic tachometer and measured it. The tachometer is just an LED and a resistor connected directly to the function gen’s output – adjust the frequency until you see the motor freeze, then check that doubling the frequency gives a doubled image rather than a single frozen one, so you know you locked on the fundamental and not a subharmonic. It turned out to spin at 30Hz. Unfortunately, under load it spun at only ~24.5Hz.

“Can I update these pixels fast enough?” – The raw signal rate is easy to deal with: 30Hz * ~180px per revolution = 5.4kHz. How many processor cycles do I have for my calculations? (20MHz/4)/5.4kHz = 926. That seems to be enough. But I still need to nail that update frequency (5.4kHz) to better than half a pixel for the effect to be convincing (otherwise there will be too much jitter in the image). That means 5400±7.5Hz, or between 924.6 and 927.2 instruction cycles: roughly ±1.5 cycles. That’s more challenging. It can be done, but I think it’s close to the limit for this PIC’s clock. A faster clock would mean buying another microcontroller, which I’d rather not do. So let’s give it a try =)
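Sanity-checking that arithmetic in a few lines (pure back-of-envelope, using only the numbers quoted above):

```python
# Back-of-envelope timing budget for the POV display (numbers from the text).
f_rot = 30.0                         # motor speed, Hz
px_per_rev = 180                     # angular pixels per revolution
f_clk = 20e6                         # PIC crystal, Hz (1 instruction per 4 clocks)

f_update = f_rot * px_per_rev        # pixel update rate, Hz
cycles = (f_clk / 4) / f_update      # instruction cycles available per pixel
f_tol = 7.5                          # +/- jitter budget from the text, Hz
cyc_lo = (f_clk / 4) / (f_update + f_tol)   # fewest cycles still in tolerance
cyc_hi = (f_clk / 4) / (f_update - f_tol)   # most cycles still in tolerance
print(f_update, round(cycles, 1), round(cyc_lo, 1), round(cyc_hi, 1))
```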

“How much power do I need if the display is fully on?” – That’s an easy one: 24 LEDs at 5V, 20mA – 2.4W. On the transmitter side, probably close to 10W given the inefficiency of a homemade device. The IRF630 should deal with this no problem.

Now that I had some idea of what I wanted to do and what the limitations would be, I dove into the PCB design. In my design the PCB spins, and it looks like this:


I confess routing the LED connections was quite difficult. Now I’m glad I chose not to use 52 LEDs! For more LEDs I would surely need a 3D arrangement or some sort of serial scheme. But for a beginner, let’s keep it easy.

It was quite straightforward to design the case around this PCB. I chose to cut most pieces out of acrylic, but as I said, one of them had to be lathed: the connection between the motor and the spinning part. The design looks more or less like this:


Sadly, at that time I hadn’t yet harnessed the power of the Prusa, so I couldn’t print the shaft connection. The entire project would have been different had I had a 3D printer back then.

With the design in hand, I shot some e-mails to get quotes for the parts. The costs were (as of 2017):


Yes, electronics are expensive in Brazil. And yes, hardware is cheap.


2. The Build

As the parts arrived, I began to build. Apart from the hours it took, I didn’t actually have any trouble. Slowly assembling and checking that each part of the circuit worked as expected, it was fairly uneventful. The worst thing that happened was one open trace at one LED (which says something about the PCB manufacturer) that I had to bridge. Below are some build pictures:



Those were hours of fun, as you might imagine. I know it’s not the best of its category, but it’s my child!


3. The programming

This is the boring-to-talk-about but fun-to-do part where you get to use the device you built and actually bring it to life. Yes, the precise-timing issue did show up during programming. It also turned out the motor speed wasn’t constant: it oscillated between 24 and 25Hz. Fortunately I had a zero datum triggered by an IR LED. That helped a lot, but it didn’t completely remove the jitter from the motor. Below are some “failed” tests from while I was programming it:

I’m sure you want to see the finished product working. So here it is:


4. Conclusion

Well, as you can see, this is more of a picture repository than anything else. If you want a closer look at the project, please check my GitHub page; the project files are all stored there.

I’d say the biggest lesson learned from this project is that processor speed is crucial for any display device. With so many pixels to update, there’s just not enough time. I gained a renewed admiration for our high-resolution full-HD screens. What an engineering feat!





Resonant Coupling Efficiency for Wireless Power Transfer

So I’ve been working on a hobby project that involves transferring power to a circuit wirelessly. The distance is quite short (<10mm), as I just need to have one part of the device spinning and the other stationary.

I decided to stick with LC resonant coupling as the solution to transfer this power, since I can accept some power loss and low audible noise is a requirement. But while studying this circuit, I wanted to know the best conversion efficiency attainable, and I was also interested in understanding why a resonant circuit is needed in the first place (why not just use a regular circuit like a transformer?).

To answer these questions, I thought of building the circuit model in a simulator (as in the picture shown below); simply playing with the parameters should give me a good grasp of what was happening.

Circuit analyzed in this article

While it did help me understand some concepts, it still wasn’t very clear what the effect of the coupling coefficient (k) was on the power dissipated in the load and on the overall power transfer efficiency of the system. So I decided to step it up a little and use Matlab. The simple code I wrote is in this link, and you can also play with the parameters for your own circuit.

Basically, I used the Control Systems Toolbox to write the transfer function of the circuit. The transfer function I’m interested in, eta, is the power dissipated in the load ZL divided by the power generated by the source Vin. The remaining power is dissipated in the source itself, through its own impedance ZS.

The plot of the frequency response for several coupling factors (shown below) is rather interesting: it shows that the peak power transfer efficiency does indeed rise as the coupling factor increases. But it also shows that you don’t need a very large coupling factor to get interesting efficiencies (for k=0.4, eta peaks at 77%, for example). This means we don’t need astonishingly good coupling (as in a transformer, where k>0.98) to transfer power. I know this conclusion was reached by smarter researchers than me long ago, but it’s nice to arrive at it by myself anyway.

Frequency domain response of the power transfer efficiency for ZL=20Ω, ZS=1Ω, C1=C2=100nF, L1=5.5µH, L2=6.0µH.
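For those without Matlab, a similar sweep can be sketched with NumPy by solving the mesh equations of a series-series coupled resonant pair. Note the topology here (Vin and ZS in series with C1 and L1; ZL in series with C2 and L2) is my assumption, and may differ from the circuit I actually simulated, so the peak values won’t necessarily reproduce the 77% figure:

```python
import numpy as np

# Two magnetically coupled series-resonant loops, phasor domain.
# Parameter values from the figure caption; topology is an assumption.
ZS, ZL = 1.0, 20.0
C1 = C2 = 100e-9
L1, L2 = 5.5e-6, 6.0e-6

def efficiency(f, k):
    w = 2 * np.pi * f
    M = k * np.sqrt(L1 * L2)
    # Mesh equations:  Vin = I1*Z1 - jwM*I2 ;  0 = -jwM*I1 + I2*Z2
    Z1 = ZS + 1 / (1j * w * C1) + 1j * w * L1
    Z2 = ZL + 1 / (1j * w * C2) + 1j * w * L2
    I1, I2 = np.linalg.solve(np.array([[Z1, -1j * w * M],
                                       [-1j * w * M, Z2]]),
                             np.array([1.0, 0.0]))   # Vin = 1 V
    p_load = abs(I2) ** 2 * ZL            # power dissipated in the load
    p_src = (1.0 * np.conj(I1)).real      # power delivered by the source
    return p_load / p_src

freqs = np.linspace(50e3, 800e3, 2000)
for k in (0.1, 0.4, 0.9):
    print(k, round(max(efficiency(f, k) for f in freqs), 2))
```

Sweeping k this way shows the same qualitative behavior as the figure: the attainable peak efficiency grows monotonically with the coupling factor.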


The following picture shows the same information, but compiles the peak efficiencies against the coupling factor k. It shows that below a certain coupling factor (~0.4) the power transfer efficiency plummets. Unfortunately, this means the resonant circuit is not the solution for large distances (as everyone already knows!), since the coupling factors are absurdly small there.

Even so, for my simple peasant application of transferring power across a 10mm air gap, that’s more than enough enlightenment on the topic.

Peak efficiencies for the previous circuit parameters plotted versus k.

Just for the record, I measured the inductances of my homemade pair of coils (5.5µH and 6.0µH, as stated in the figure) and the coupling factor against distance (no capacitors, just the coils, using the open-circuit voltage method, f=50 to 800 kHz); the results are shown below. According to the charts I generated, then, I should be fine, right? Well, it will depend on whether I can make a MOSFET driver with a nice enough efficiency factor to multiply these maximum eta values!

Homemade coils and the coupling factor measurement results. The distance d is the gap between coils (I used a wood spacer of known thickness).
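For reference, my reading of the open-circuit voltage method: drive coil 1, measure the voltage V1 across it and the open-circuit voltage V2oc induced in coil 2; for (near-)lossless coils their ratio equals M/L1, hence k. The readings below are hypothetical, not my bench data:

```python
import math

# Open-circuit voltage method for the coupling factor k.
# V2oc/V1 = wM*I1 / (wL1*I1) = M/L1  ->  k = M/sqrt(L1*L2) = (V2oc/V1)*sqrt(L1/L2)
L1, L2 = 5.5e-6, 6.0e-6   # measured inductances, H (from the text)

def coupling(v1, v2_oc):
    return (v2_oc / v1) * math.sqrt(L1 / L2)

# Hypothetical example readings, volts (not the article's data):
print(round(coupling(2.0, 1.1), 3))
```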





A freezing tunnel (VRT) thermal model

Like many refrigeration engineers around the world, I’ve always been curious to better understand how this machinery behaves transiently. For those who do not know the Variable Retention Tunnel, take a brief stop at Google before reading on, so you can better appreciate what I’m talking about!

In practice, we see that when the production rate increases, the heat load rises and the compressors might struggle to keep the evaporation temperature low. The evaporator temperature then rises, which makes the air temperature inside the freezing tunnel rise as well. The system is constantly seeking an equilibrium point where the evaporator temperature suits the current product mass flow rate.

But as each product cools down and eventually freezes, the heat transfer at its surface slows down because the surface temperature drops. This means some of the product inside the tunnel is not contributing to the heat load as much as the fresh, hot product that just entered the device. And all of this is happening simultaneously, which makes the appreciation and understanding of the process remarkably difficult.

So, recently, while working on a project, I had the idea: why don’t I simulate the system so I can better grasp what is actually happening? I then began an endeavor towards this objective, and the concepts I picked up along the way were quite enlightening.


1. The product comes in waves

You are probably familiar with the product freezing curve shown below. It tells us the temperature evolution in time for an unfrozen product subjected to a freezing process. The surface freezes first; then the low temperatures slowly (depending on the product’s thickness) reach the center of the product.

Typical food product freezing curve

Another thing this curve tells us, although indirectly, is the power the product releases into the environment air. As q'=h*A*(T_s-T_\infty), and assuming the convection coefficient and the product surface area don’t change, the power released (q') by the product is proportional to the red curve in the figure.
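As a toy illustration of that proportionality (a lumped body, which a thick freezing product is definitely not, and made-up numbers):

```python
import math

# Toy lumped-capacitance model: the surface-to-air temperature difference
# decays exponentially, so the instantaneous heat release q' = h*A*(Ts - Tinf)
# decays the same way. All numbers are made up for illustration only.
h, A = 25.0, 0.05          # W/m2K, m2
m, cp = 1.0, 3500.0        # kg, J/kgK (roughly unfrozen food)
tau = m * cp / (h * A)     # time constant, s
dT0 = 35.0                 # initial product-to-air temperature difference, K

def q(t):
    """Instantaneous heat release, W, t seconds after loading."""
    return h * A * dT0 * math.exp(-t / tau)

print(round(q(0), 1), round(q(3600), 1))  # heat release now vs. after one hour
```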

The implications of this are quite interesting: as the curve shows, the temperature difference (and therefore the power) drops by about half within the first hour. So the product contributes the most to the heat load in its first hour, after which the remaining heat is “locked” inside the product, making the heat transfer much slower. As product keeps coming into the tunnel during the loading hours, the effect looks like a wave of products, as shown in the video below:

The tunnel in this video has a 12h minimum retention time for any given product. This means that during working hours it is almost always full of “hot” product. When product stops coming in, it gradually stops “heating up” the tunnel and the global heat load drops. But how does this translate into the total heat load transient profiles?


2. The daily oscillating heat load profile

Unfortunately, in practice we cannot measure the total heat load except with quite expensive equipment (and even then it would be quite inaccurate). But after simulating a whole tunnel, one can simply integrate the data and get the total instantaneous heat load at each simulated instant.

Of course, the heat load profile will depend on the production schedule and on how long and when the compressors are turned off to save energy. In some countries, there is a strong variation in energy fees depending on the hour of the day and the season of the year. These variations exist so the energy system is not overloaded during the hours most people are at home using electricity (normally early in the night), and companies are encouraged to turn off their machines when the fees are higher.

These factors make the heat load profile very rich and detailed. Below is a picture of the same tunnel’s simulated profile over five days. The profile is very repetitive, with the same peaks and troughs daily. In region A of the graph, the tunnel is empty and the heat load builds up as product enters. The heat load drops to zero in region E because the fans and the machine room are turned off between 18:00 and 21:00. Since production is still ongoing during this window, the heat load briefly peaks at B when the machine room restarts. This far exceeds the capacity of the compressors at the setpoint, and the machine room will struggle during this time. As product stops being fed into the system (at 0:00), the heat load stabilizes and, during the off-production hours (from 0:00 to 8:00), it drops to point D in the graph. This cycle repeats every day, and during the weekend break the power drops to almost zero (only the conduction through the insulation and the fan power remain). Then the weekly cycle begins again.

Simulated heat load profile of a typical VRT installation in Brazil.


3. The machine room struggles… But sometimes that’s okay

I thought that instead of assuming the machine room was “sized” to maintain the evaporator pressure at all times, I’d rather let it “struggle” to keep the setpoint during the high heat load periods of the day. “How would this affect the product?”, I asked myself. Let’s examine the chart below:

The tunnel starts the week, again, with virtually no heat load, as the product’s temperature has homogenized with the air temperature after sitting so long over the weekend (region A). As product slowly loads the tunnel, the air temperatures rise (region B), but the machine room is still only partially loaded and still holds the evaporator temperature. In region C, the product heat load exceeds the machine room capacity, increasing the pressure of the evaporating fluid (and therefore T_{evap}) even though the compressors are working at full load.

Simulated Evaporator and Air Temperatures for the same VRT installation.

In region D, the compressors are turned off for 3 hours, and the temperature readings are virtually meaningless. After the compressors are turned back on (region E), the huge heat load hits the machine room hard and makes the evaporator pressure rise all the way to a saturation temperature of -26ºC. Then, as the heat load slowly settles to a smaller value, the compressors begin to catch up.

This is a beautiful picture of the freezing tunnel, and one that corresponds quite closely to what I’ve seen in my experience. But if the compressors cannot keep the evaporator at the setpoint temperature, isn’t that a compressor sizing problem?

Well… It’s complicated. On one hand, compressors are normally sized for a heat load that meets some average or “guessed” instantaneous criterion. Let’s say, for example, that one sizes the machine room capacity to meet the instantaneous heat load of one hour. The data used for this simple calculation is:

\dot{m}=24000 kg/h

\Delta H=275 kJ/kg

ExtraPower (OtherDevices)=257 kW

Safety Factor=f_s=1.15

The total power required for these compressors would, then, be:

P=f_s * (\dot{m} * \Delta H + Extra Power) = 2395 kW

This corresponds to the magenta dotted line in the heat load chart. As the chart shows, the magenta line is below the peaks each and every day. But does this mean the compressors are undersized? Sizing for the peak would mean a 9MW capacity, for a peak that lasts a short half-hour. Is it worth it?
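The sizing arithmetic can be reproduced in a couple of lines (note the kg/h-to-kW conversion hidden in the formula; the small difference from the 2395 kW above is just rounding of the input data):

```python
# Reproducing the machine room sizing arithmetic from the text.
m_dot = 24000.0        # product mass flow, kg/h
dH = 275.0             # enthalpy change per kg of product, kJ/kg
extra = 257.0          # other devices (fans etc.), kW
fs = 1.15              # safety factor

product_load = m_dot * dH / 3600.0     # kJ/h -> kW
P = fs * (product_load + extra)        # required compressor capacity, kW
print(round(product_load, 1), round(P, 1))
```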

Well, it turns out that, in practice, it’s not. The whole system has quite a lot of inertia: this tunnel stores 300 tons of product, which slows down its reaction time. Therefore, some transients can be accepted. But how far can we go?

Let’s say another designer “wants” to size the same system for the daily average heat load. The design heat load is now reduced to 2395 kW*(21h/24h)=2095 kW. Let’s size our compressors for this load. This means we now have 300 kW (12.5%) less compressor juice than before. So, obviously, we’re going to stay longer in the “undersized” region, as the chart below displays. But what effect does this have on the product temperatures?

New evaporator temperature chart after reducing the compressor capacity by 12.5%

Well, let’s take a look at what the product does. As different products stay in the tunnel for different times (I’ll talk about this), some products will exit at a lower temperature than others. This means there is a “statistical distribution” of the products, which is displayed in the next chart.

The chart actually shows the integral of the statistical distribution, shown as percentiles. Let’s take the point (-30.2ºC, 40%) in the upper chart as an example of how to read it. This point says that if I take the sample of the lowest 40% of the product outlet temperatures, the largest temperature in it is -30.2ºC. In other words, 40% of the product is at -30.2ºC or lower.
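The percentile reading can be illustrated with synthetic numbers (random outlet temperatures, not the simulation’s data):

```python
import numpy as np

# Synthetic product outlet temperatures, degC (illustration only).
temps = np.random.default_rng(0).normal(-25.0, 4.0, 10000)

p40 = np.percentile(temps, 40)        # 40% of products are at p40 or colder
frac_ok = np.mean(temps <= -18.0)     # fraction meeting the -18 degC criterion
print(round(p40, 1), round(frac_ok * 100, 1))
```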

Effect of a 12.5% reduction in compressor capacity on the product outlet temperatures (center)

The two charts above show that when we reduce the machine room compressor capacity, the average outlet temperature at the center of the products gets higher. But we need to compare it with a threshold, shown as a black dashed line in the chart (-18ºC). In the upper case (larger compressor), 99.6% of the product meets the -18ºC threshold, i.e., is at -18ºC or colder. In the lower scenario (smaller compressor), a smaller portion (97.1%) of the product meets the threshold. Whether these numbers are satisfactory is a decision the plant operator must make.


4. But I, the designer, shall dictate the capacity of the system!

The takeaway from the previous section is that the heat load of a given system (and this applies to other thermal systems as well) is not something we “decide”, but a product of the operational variables. Of course, we can control most of the operational variables, but we shouldn’t assume that’s always the case.

Another designer might say, for example, that he “wants” to size the compressors to the weekly averaged heat load. He argues that the weekend is idle time, compressors are shut off unnecessarily, and he ought to lower the initial investment by installing smaller machines that will operate the whole week, thereby saving millions of dollars! Is that possible? Let’s see…

The figure below shows how the temperatures behave for this compressor sizing. We can see that the oscillations are wilder, the peaks are roughly 5ºC above the previous simulations, and the machine room saturation temperature constantly oscillates with the tunnel heat load. But on the weekend the heat load still drops and the compressors still need to be shut off.

Temperature profile for a 1496 kW compressor (weekly sized).


So no improvement happened here: the compressors are still struggling during the week and still being shut off during the weekend. Why? Because, as I previously mentioned, we cannot dictate the heat load. The system does that. We just size for it.

You’re probably curious to see how the product temperatures performed under these “undersized” conditions. As shown in the figure below, now 75.4% of the product meets the -18ºC center temperature criterion. That’s actually not remarkably bad – but some clients will not be remarkably happy with some -10ºC measurements in their quality control spreadsheets. That’s why we oversize!

Effect of a 37.5% reduction of compressor capacity in the product temperature statistical distribution.


5. Conclusion

This is sort of a results article. I believe I’ll write some other articles on the mathematical devices I used to put the simulator together. I hope it was enlightening for you, the reader, as much as it was for me. If you think that kind of simulation would be useful for you, feel free to contact me.

Thanks for reading so far!

Design Review – Flying like Iron Man from the Hacksmith

1. Introduction

So, I’m a subscriber to the Hacksmith YouTube channel, run by a guy who tries to bring superhero devices into the real world. He’s been working for a while on the “Flying like Iron Man” project, which is quite a difficult and money-hungry undertaking. Although I’m not quite a fan of the engineering methods shown in the channel for such a complex device, I bear in mind that the audience would probably find the engineering details boring and unrewarding, so the progress updates lean more on the testing side than on the design side.

Anyway, he started a crowdfunding campaign on GoFundMe, which means this might get serious. Although I can’t contribute, since I’m back in Brazil (and here our money is worth quite a lot less than in those developed countries), I thought it might be interesting to review this project. So sorry, Hacksmith, but this is all I have to offer… =/

I remembered a spreadsheet I made a while ago to determine the rotor diameter for a personal one-seater electric helicopter, for a senior design project class. As the spreadsheet was ready to go, it wasn’t much trouble to plug in the numbers for his design. You can download it here if you want to play with it. It’s not a great spreadsheet, though, and it’s far from the polished, user-friendly state I normally take my spreadsheets to.


2. The helicopter equation

The jetpack, the helicopter, the drone and the rocket are vehicles that have something in common: They need to stay afloat by pushing a fluid downwards. The airplane stays afloat in the same fashion (by pushing air down), but it does so with its wings by moving forwards.

But what’s the difference? Well, it takes quite a lot of energy to push enough of a light fluid (such as air) to generate any decent lift. You can see, for example, that a water jetpack is much smaller and more feasible than an air-hovering one. This has to do with the fact that to get thrust you need to pump mass (as momentum is p=m*v), which results in a much larger device if the fluid being pumped has low density (as is the case for air).

The problem becomes even more important when we account for energy. As anything that stores energy (fuel or batteries) has mass and needs to be carried by the hovering device, there is only so much of it you can bring along. The amount will depend on how long you want to stay hovering. For a helicopter or a hobby drone, it’s desirable to hover for at least 20-30 minutes, so one can get from point A to point B. The longer you want to stay in the air, the more energy you need to bring. But then you become heavier, and need even more energy to stay afloat. How can one assess whether this becomes an endless loop?

Well, that’s where the helicopter equations come to help us designers. I selected the set of equations shown below, which is useful for solving a battery-powered hovering device. Since it assumes no relationship between the mass of the engine/thruster and its size/power, it’s really only valid within a limited range of powers and fan diameters. But it will be enough to make my point:

(1)   L=\dot m*v=m_T*g [Lift equation]

(2)   P=(\dot m *v^2)/(2*\eta) [Power needed to displace the fluid]

(3)   m_T=m_{bat}+m_{str}+m_{person} [Total mass of the hovering device]

(4)   E_{st}=m_{bat}*\rho_e [Stored energy in the battery]

(5)   t_{flight}=E_{st}/P [Total flight time to discharge battery]

(6)  \dot m =\rho_{fluid}*v*(\pi*D^2/4)  [Relationship between mass flow and volumetric flow]

Solving the equations above for D leads to this very interesting relationship:

D=\frac{(m_{bat}+m_{str}+m_{person})^{3/2}}{m_{bat}} * \frac{t_f}{\eta*\rho_e}*\sqrt{\frac{g^3}{\pi*\rho_f}}

Although that equation looks scary, some lessons can be extracted from it:

  • The diameter of the fan is directly proportional to the flight time t_f
  • It’s inversely proportional to the battery energy density \rho_e , which means better batteries will have a direct impact on the fan size
  • The diameter of the fan has this complex relationship with the battery mass m_{bat} . It basically means the following: if you were to reduce the battery mass to zero, the fan diameter would need to be infinite (quite unfeasible). If the battery mass is very large, the diameter grows with the square root of it (D\propto m_{bat}^{1/2} , since m_T\approx m_{bat} ). In between, there is a minimum diameter.

The minimum value of D will be possible only when:

m_{bat}=2*(m_{str}+m_{person})
Which is surprisingly simple! This leads to a minimum diameter of:

D_{min}=\frac{3\sqrt{3}}{2}*\sqrt{m_{str}+m_{person}}*\frac{t_f}{\eta*\rho_e}*\sqrt{\frac{g^3}{\pi*\rho_f}}
Unfortunately, that normally means a lot of batteries, so the best choice is usually not the minimum diameter. Also, for the case under analysis, it would lead to a supersonic exit speed, which throws all the equations shown here out the window…
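To see the minimum numerically, here’s a quick scan of D(m_bat) from the diameter equation above, with made-up but plausible numbers (50% fan efficiency, ~165 Wh/kg batteries, 80 kg pilot, 25 kg structure, 2 minutes of hover; none of these are the Hacksmith’s actual figures):

```python
import math

# Fan diameter as a function of battery mass, from the article's equation.
g, rho_air = 9.81, 1.2          # m/s2, kg/m3
eta, rho_e = 0.5, 600e3         # fan efficiency, battery energy density J/kg
m_str, m_person = 25.0, 80.0    # structure and pilot mass, kg
t_f = 120.0                     # desired hover time, s

def D(m_bat):
    m_T = m_bat + m_str + m_person
    return (m_T**1.5 / m_bat) * (t_f / (eta * rho_e)) \
           * math.sqrt(g**3 / (math.pi * rho_air))

# Scan for the battery mass that minimizes the fan diameter:
masses = [0.5 * i for i in range(1, 2001)]      # 0.5 .. 1000 kg
m_opt = min(masses, key=D)
print(round(m_opt, 1), round(D(m_opt), 3))
```

With these numbers the minimum indeed lands at m_bat = 2*(m_str + m_person), and the resulting fan is so small that it would have to push air at an unrealistically high exit speed, consistent with the supersonic caveat above.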

3. So, is it possible to fly like Iron Man?

Yes, it is indeed possible. It’s not crazy stuff, and the Hacksmith’s data seems accurate and adequate. The spreadsheet I linked in the first section is based on his data. So what does the graph of diameter versus battery mass look like for the Hacksmith’s design?

The Hacksmith’s jetpack point of operation

This curve was generated for a device using 6 EDFs. It seems he will be able to hover a little less than 2 minutes, which matches the claims in his GoFundMe campaign. So it seems legit, although it wouldn’t be well suited as a transportation device. If he wanted more hover time, it’s just a matter of increasing the fan size or carrying more batteries. Or both!

For this calculation I assumed a fan energy conversion efficiency of 50%, but the fans linked in his campaign page seem to claim an efficiency of 70% (which seems a bit too optimistic for these devices). So there might be some difference between calculation and reality, but that’s exactly what engineering boils down to anyway!


So I hope this helped you understand the underlying principles through a benchmark case. I hope you guys at the Hacksmith channel actually manage to build it! I’ll definitely stay tuned!


Mass file renaming and relinking in Inventor Part I: The results

*Edit: A new, translated version is posted on GitHub with install instructions. Check it out!


1. Introduction

If you’re a mechanical designer for a living, chances are you’ve already had to copy an existing design. Even higher are the chances that you’ve had to rename all the files in a given assembly to match the needs of an ERP or coding system.

Well, there are many ways of doing this, but I found that all the built-in ways in Autodesk Inventor have some sort of caveat:

  • Manual renaming in explorer: This is a pain in the ***. Apart from renaming the files one by one, you have to keep track of the old-new name correlations so when you open the assemblies in Inventor, you can redo the links manually.
  • Excel VBA to rename files: This is a little bit less cumbersome. You relieve the pain of renaming the files one by one, but the relinking still needs to be done manually in Inventor. At least you can print the correlation table from Excel.
  • Using the Design Assistant: This tool is nice, but a missing feature keeps it from being usable for large assemblies: you cannot paste a table into the treeview. Imagine having the old-new naming correlations ready in an Excel table but being unable to paste them into Design Assistant: you still have to do the renaming manually. Sometimes it hurts productivity even more, as the files are organized according to the tree, so finding them is yet another issue.
  • Using the CopyDesign add-in sample from the Inventor SDK: This is an example piece of software that comes with the Inventor API SDK. It does what it’s supposed to do: copies a design and renames the files. The problem is that you need the new names beforehand, and new files (which will probably appear in a copied design) have to be named correctly during design, which is not always possible under an ERP system.

2. My solution

So, now that I’ve shown the available options and how they did not meet my needs, let me state what I wanted to do and what I came up with using the Inventor API.

I want to make a file list using Excel VBA (listing the file names for the current assembly) and, using some made-up rule, assign new names to the existing files. Excel will always be far more flexible than any piece of software I could write, so I decided to leave the rule creation to the user. A table like the one shown below is output from Excel as a CSV file and read by my program:

Input CSV file format for the renaming system

I open the main assembly file and import the correlation table into my home-brewed API software. It figures out which files need to be renamed by matching names against the “from” column of the correlation table. It finds out whether there are any equally-named *.idw files in the path. It then renames all the files, redoes the links at all levels of the assembly and, finally, saves the assembly.
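Outside Inventor, the rename step itself boils down to something like this Python sketch (the “from”/“to” column names are my assumption for the CSV above; the real add-in also redoes the assembly links through the Inventor API, which plain file renaming cannot do):

```python
import csv
import os

def batch_rename(csv_path, folder):
    """Rename part/assembly files per a from->to CSV, plus same-named drawings.

    Illustration only: handles files on disk, not Inventor's internal links.
    """
    with open(csv_path, newline="") as fh:
        mapping = {row["from"]: row["to"] for row in csv.DictReader(fh)}
    for old, new in mapping.items():
        for ext in (".ipt", ".iam", ".idw"):      # parts, assemblies, drawings
            src = os.path.join(folder, old + ext)
            if os.path.exists(src):
                os.rename(src, os.path.join(folder, new + ext))
    return mapping
```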

That’s the procedure in a nutshell. The heavy work is done by the software. So how does it look in Inventor? Follow the step-by-step screenshots shown below.

2.1. Main assembly loaded in my API interface

***Edit: I changed the button name to “Batch Renaming“.

Screen of the “Batch Renaming” add-in showing the list of components referenced by the assembly

2.2. Loading the CSV file

CSV file being selected in the file browser

2.3. Correspondences loaded in the file being displayed on-screen

In this case, not all files were selected to be renamed as some parts belong to the library.

Files to be renamed are shown in the table in bold

2.4. Renamed files in Explorer

Files after the renaming in the explorer view…
… and in the Inventor treeview!

3. Conclusion

The use of this tool saved us quite a lot of time renaming files. For larger assemblies (>700 parts), a reliable manual renaming took up to 3-4 hours; this reduced it to something like 10 minutes. If you think this tool might be useful for you too, please let me know in the comments!

I published it on GitHub at this address: Please check it out and let me know if it was useful for your job!

If you guys understand Portuguese, you might find the video I made in my language for my comrades useful. I’ll soon produce an English version with the updated code so a wider audience can follow along. **EDIT: there it is =)

The freezing heat exchanger: Part III – Some insight

So in Part I and Part II we discussed how to numerically solve the freezing heat exchanger problem. After some tinkering with the results I got, I realized something very simple: the maximum heat transfer rate in a freezing heat exchanger does not depend on:

  • The ice thickness;
  • The non-freezing fluid heat transfer coefficient;
  • The material/thickness of the tube wall.

Which is quite unintuitive! Basically, the maximum heat transfer rate of a freezing HX happens when it is completely frozen. And when that happens, we know the temperature of the liquid boundary: T_{wall}=T_{freeze} (assuming no supercooling, of course).

This allows us to analyze a freezing heat exchanger based on the maximum heat transfer rate possible, in a way similar to the ε-NTU method. Let’s call it the ice-efficiency method, and name an ice-efficiency factor \varepsilon_{ice}, defined as:

\varepsilon_{ice}=\frac{q_{HX}}{q_{fully-frozen}}
Where q_{HX} is the effective heat transfer rate of the heat exchanger, while  q_{fully-frozen} is the heat transfer rate of the same heat exchanger, should it be fully frozen. Of course, q_{HX}<q_{fully-frozen}.

Actually, a similar approach has already been suggested as a “dimensionless heat transfer rate variable” q^* by Zerkle (1964) in his PhD thesis, so it’s not really that innovative an idea. The difference here is that I’m using the maximum heat exchangeable by the HX at its actual size, instead of the maximum heat exchangeable by an “infinitely long” HX.

q_{fully-frozen} can be calculated using the ε-NTU method, assuming T_{wall}=T_{freeze}. First, let’s recall the definition of the maximum heat transfer rate q_{max} (such that q_{HX}=\varepsilon\,q_{max}):

q_{max}=C_{min}\left(T_{in}-T_{freeze}\right)
And the number of transfer units:

NTU=\frac{h_i A}{C_{min}}

An important thing to notice in the NTU equation is that U, the overall heat transfer coefficient, has been replaced by h_i, the internal convection coefficient. This is because, given a fixed wall temperature (T_{freeze}), the heat transfer rate is independent of the other thermal resistances. What this means is that, once the HX is fully frozen (which is the best case), the maximum possible heat transfer depends only on the liquid convection intensity.
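The fully-frozen limit is then just a single-stream ε-NTU calculation with the wall pinned at T_{freeze}. Below is a small Python sketch (my actual tools are Excel VBA; Python is used here for convenience) evaluating q_{fully-frozen} with the water-side geometry quoted in Part II. The convection coefficient h_i is an assumed placeholder value, not a computed or measured one:

```python
from math import exp, pi

# Assumed illustrative values (h_i is a placeholder)
h_i = 3000.0                  # W/m^2.K, internal convection coefficient
d_i = 0.0351                  # m, tube inner diameter
L_total = 16 * 5.0            # m, 16 tubes of 5 m each
m_dot = 70.0 / 3600 * 1000.0  # kg/s, 70 m^3/h of water
cp = 4186.0                   # J/kg.K, water specific heat
T_in, T_freeze = 2.0, 0.0     # degC

A = pi * d_i * L_total        # internal heat transfer area
C_min = m_dot * cp            # single stream: only the water changes temperature
NTU = h_i * A / C_min
eps = 1.0 - exp(-NTU)         # effectiveness for a constant-temperature wall
q_max = C_min * (T_in - T_freeze)
q_fully_frozen = eps * q_max
print(f"NTU = {NTU:.2f}, eps = {eps:.3f}, q_ff = {q_fully_frozen/1000:.1f} kW")
```

The ε = 1 − exp(−NTU) relation is the standard single-stream limit, which applies here because the ice front holds the wall at a constant T_{freeze}.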

Two takeaways come from this interpretation: first, that (mild) solidification is desirable in a freezing heat exchanger; second, that this sets the operational limit. Of course, unlike in the ε-NTU method, the way to find \varepsilon_{ice} is not so clear. Maybe we can come up with a correlation for it?

The freezing heat exchanger: Part II – Solution

1. Introduction

So in Part I (link here) we devised a method to solve the freezing heat exchanger problem. Here we’re going to look into the numerical method itself and what results we can get with this tool.

2. Solving it numerically

We began by stating the fundamental equations of this 2D flow, which would need to be integrated in three dimensions (two space dimensions and one time dimension). By eliminating the need to solve for the flow field, the integrator only needs to solve for one space and one time dimension, greatly simplifying both the integration method required and the number of computations to be made.

The solution method I devised is explained graphically in the diagram below:

Integrator diagram to solve for the freezing HX

This is the basic skeleton of the integrator. We can add a lot of stuff there to adapt the solution to our needs. For example, in my case I added the dynamic behavior of the shell side, so I could implement a controller for a motorized flow valve. This allowed me to understand the control behavior and how the HX responded to different conditions.

I implemented this method in VBA for MS Excel; although I know this isn’t the best tool for the job, it’s the one we can afford here. It doesn’t actually matter which programming language you use, as the calculations aren’t that intensive.
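To make the skeleton concrete, here is a stripped-down illustration of the marching scheme in Python (again, the real implementation is Excel VBA). The tube is split into axial segments; each time step the water temperature is marched along the tube, and the local ice thickness is updated from the energy imbalance at the ice front (a Stefan condition): heat conducted out through the ice minus heat arriving from the water. All property values, coefficients and the resistance model below are simplified placeholders, not the exact formulation in my spreadsheet:

```python
from math import log, pi

# Simplified placeholder properties and geometry (illustrative only)
h_i = 3000.0                # W/m^2.K, water-side convection
h_o = 5000.0                # W/m^2.K, shell-side (evaporating ammonia)
k_ice = 2.2                 # W/m.K, ice conductivity
rho_ice, L_f = 917.0, 334e3 # kg/m^3, J/kg latent heat of fusion
d_i, L_seg, n_seg = 0.0351, 0.5, 10
m_dot, cp = 19.4, 4186.0    # kg/s, J/kg.K (water)
T_in, T_shell, T_freeze = 2.0, -5.0, 0.0  # degC

def step(delta, dt):
    """Advance one time step: march T along the tube, update ice thickness."""
    T = T_in
    for j in range(n_seg):
        r_w = d_i / 2 - delta[j]          # current liquid-side radius
        A_w = 2 * pi * r_w * L_seg        # liquid-side area of this segment
        # heat from the water to the ice front (or to the bare wall)
        T_surf = T_freeze if delta[j] > 0 else T_shell
        q_in = h_i * A_w * (T - T_surf)
        # heat conducted through the ice layer and shell film
        R_ice = (log((d_i / 2) / r_w) / (2 * pi * k_ice * L_seg)
                 if delta[j] > 0 else 0.0)
        R_out = 1 / (h_o * pi * d_i * L_seg)
        q_out = (T_freeze - T_shell) / (R_ice + R_out)
        # Stefan condition: the imbalance freezes or melts ice at the front
        d_delta = (q_out - q_in) * dt / (rho_ice * L_f * A_w)
        delta[j] = max(0.0, delta[j] + d_delta)
        # water cools as it gives up q_in along this segment
        T -= q_in / (m_dot * cp)
    return T  # outlet temperature at this time step

delta = [0.0] * n_seg       # ice thickness per axial segment
for _ in range(200):        # 200 s at dt = 1 s
    T_out = step(delta, dt=1.0)
```

Even this crude version reproduces the qualitative behavior discussed below: the ice grows fastest where the water is coldest and settles to a stable thickness when the water-side and shell-side heat flows balance.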

3. The results

Below I’m showing some sample results of what we can get with this simple methodology. The graphs are for a heat exchanger made of stainless steel piping, with d_e=38.1 mm, d_i=35.1 mm, L=5.0 m and 16 tubes in total. It’s arranged in 4 circuits of 4 passes each, and the shell-side fluid is ammonia evaporating at -5ºC. The tube-side fluid is water, pumped at a rate of 70 m³/h and entering at +2ºC.

From the graphs below, we can see that the ice builds up to a maximum thickness of 0.66 mm at the outlet, stabilizing after ~200 s. This means that in this condition the heat exchanger will not clog with ice.


Graphs generated by solving the equations presented

So what happens if we reduce the water flow rate by half? The results for this condition are presented below. Look at how the ice layer builds up much thicker now, heading towards a thickness of 2.15 mm and still growing. Also, note in the instantaneous power graph (first graph) how the shell-side power (heat delivered to the fluid in the shell) is now quite different from the tube-side power (heat exchanged with the fluid in the tube). This difference is exactly the ice buildup, whose formation releases latent heat.

Even in this condition, it is not very likely that the HX will be blocked by an ice plug. We can see that the process conditions must be very harsh for such a failure to occur.

Graphs for the new condition, with half as much flow rate in the heat exchanger.

So let’s see what happens if, due to a hypothetical drop in the process heat load, the water inlet temperature drops to 1.0ºC instead. The simulation time was now increased to 1000 s (~17 min) to show the long-term effects. The ice now builds up to 4.5 mm thick at the outlet, and the entire heat exchanger is frozen to some thickness. Even so, it seems to be in a stable condition, and if the pump can still push those 35 m³/h through the HX, it will probably not clog up.

We cannot, however, say anything about the integrity of the pipe, as the stresses in the pipe wall increase as the ice builds up. That requires a different calculation.

Yet another graph showing the ice thickness evolution when the water inlet temperature is 1ºC.

As a last example, take a look at what happens if we simulate this HX, with 70 m³/h of water coming in at +2ºC, starting from a uniform initial ice thickness of 5 mm:

Graphs for the same HX “thawing” with water to the steady-state conditions.

The ice thickness, as one might have expected, decreases to the operational condition. This means we can simulate both HX freezing and thawing with this method, possibly opening up a way to simulate the behavior of ice banks.

I hope these plots demonstrated the capabilities of the method and inspired you to make your own version. If you have any questions, feel free to send me an e-mail or leave a comment!


Continue to Part III >>