The Mandelbrot Set with CUDA

Back in my high school days, I was into distributed computing. It was the era when computers consumed pretty much the same amount of power even when idle. I chose to dedicate my CPU cycles to SETI@home (yeah, I was a big fan of Contact), first through the dedicated installer, later through BOINC. My contribution was pretty average, but around 2007 I noticed that some users spiked and started to produce pretty awesome numbers.

It was the year when NVIDIA launched their first CUDA-enabled GPU, which meant you could use your card for more than gaming. The CPU is a general-purpose processing unit, and it's really good at doing anything. As you know, when you are good at a lot of things, you cannot be great at any one of them. The GPU, on the other hand, does the same type of mathematical computation over and over again; practically every pixel in a game is rendered the same way. In short, the GPU is architecturally optimized to carry out a lot of parallel tasks.

In my Mandelbrot set generator last time, you saw that we do the same mathematical calculation for each pixel in the image. Since you know I love performance optimizations, let's try to use CUDA to offload these intensive computations onto our GPU.
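For context, the per-pixel work is an escape-time iteration like the one below (a minimal Python sketch; the function name and the `max_iter` cutoff are my own illustration, not taken from the original generator):

```python
def mandelbrot_iterations(cx, cy, max_iter=100):
    """Escape-time count for the point c = cx + cy*i.

    Iterates z = z^2 + c from z = 0 and returns how many steps it takes
    for |z| to exceed 2 (i.e. |z|^2 > 4); points that never escape within
    max_iter steps are treated as members of the set.
    """
    zx = zy = 0.0
    for n in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return n
    return max_iter
```

Because this exact loop runs independently for every pixel, with no data shared between pixels, it maps naturally onto the GPU's many parallel threads.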

Continue reading

Solving Sudoku

I like to write puzzle solvers; they refresh my thinking. You probably learned in your algorithm classes that in these cases you can resort to backtracking. While backtracking is surely a solution in many theoretical cases, it can rarely be used as-is in real-life applications, since the performance impact is huge.

Sudoku is one of those puzzles that seems trivial to solve with backtracking, but let's think about it. There are 81 positions and each position can have 9 different values, so a naive search faces up to 9^81 ≈ 2 × 10^77 candidate grids; generating and validating each and every step could take ages.

Instead of backtracking, let's track the possible values that each cell can have. This is how humans solve it, and it will take us very far toward the solution. Here are the data structures I propose.
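To give a flavor of the idea, here is one common way such candidate tracking can be represented (a Python sketch of my own; the actual data structures proposed in the full post may differ):

```python
def init_candidates(grid):
    """grid: 9x9 list of ints, where 0 marks an empty cell.

    Returns a 9x9 grid of candidate sets: a solved cell keeps only its
    value, an empty cell starts with all nine possibilities.
    """
    return [[set(range(1, 10)) if grid[r][c] == 0 else {grid[r][c]}
             for c in range(9)] for r in range(9)]

def eliminate(cand, r, c, value):
    """Remove a value placed at (r, c) from the candidates of its peers:
    every other cell in the same row, column, and 3x3 box."""
    for i in range(9):
        if i != c:
            cand[r][i].discard(value)  # same row
        if i != r:
            cand[i][c].discard(value)  # same column
    br, bc = 3 * (r // 3), 3 * (c // 3)
    for i in range(br, br + 3):
        for j in range(bc, bc + 3):
            if (i, j) != (r, c):
                cand[i][j].discard(value)  # same box
```

Whenever a cell's candidate set shrinks to a single value, that value can be placed and eliminated from its peers in turn, which is exactly the propagation a human solver performs.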

Continue reading

Let’s care about performance

The performance of the applications I write has always been something I deeply care about. I'm talking about using as few resources as possible, though of course not to the detriment of other important things. Teams often neglect this and take action only when there is a perceivable performance issue.

When is the right time to optimize?

I would say it always needs to be on your mind, but before you wrap up a project it is always good to tighten the screws and optimize what you can.

Decisions around your data model will have a big impact on performance, and they are very hard to change in the final phase of a project. This is why I say you need to be mindful of performance all the time.

In the last sprints of a project, most of the important flows are already there. You understand the application a lot better than you did in the beginning. Since everything is in place, this period is a good opportunity to poke around and see how some important flows perform. You are also in a better position to form an educated opinion on whether something needs improvement.

Continue reading