There comes a time when the tools are available to complete a task but the way forward isn’t exactly clear. More exotic tools often lack complete or generalized tutorials, and it falls to the individual to find out how to use them for their own purposes without explicit guidance from others. Of course, it doesn’t have to be this way; so here’s another tutorial for you all on how to set up an ARM cross-compilation environment on Linux.
I program a lot. A lot. For me, a good code editor is an essential part of my computing toolkit. It has to be fast, understand my primary languages (C, Python, and Java), and offer features that make me a more productive programmer, such as autocompletion and documentation lookup.
For a long time, I used various IDEs that were designed to provide exactly that. I used NetBeans for most of my editing through my freshman year and half of my sophomore year in college. After that, I used Eclipse. My issue with both of these environments was the same: a lack of easy extensibility. I want my editor to integrate well with all of my tools.
For example, both of the IDEs mentioned above were aimed primarily at Java developers, and while they have extensions to enable editing Python and C, those extensions are frequently not very good. PyDev for Eclipse never seemed to work well for me, and using such a heavyweight editor to write small, lightweight Python scripts seemed like overkill.
That said, they definitely have some nice features: project management, integrated code browsing, GUI designers, mouseover autocomplete and documentation, automatic generation of build rules, and run/compile support all come to mind. It would be nice if I could have an editor that worked faster and still provided a lot of these features. I decided it was time to go searching for a solution.
The solution that I came to was Emacs. Emacs has extensibility at its heart and soul; for anything that you might want to do, Emacs probably has something written that you can use right out of the box. This focus on extensibility is what drew me to it over vi; while it might be a more bloated editor, that hardly matters on my quad-core 8GB development machine. It starts far faster than Eclipse and is more stable, and with my additions (made easy by the availability of packages and the easy .emacs file), it supports most of the features that I used in Eclipse.
My Emacs setup is currently heavily Python-oriented, as I was working on a Python program at Google when I switched. It includes autocompletion, semantic parsing, project management and refactoring, code browsing, syntax highlighting, and all of the usual Emacs goodies. I’ve tried to keep the number of elisp packages installed through the package manager on Ubuntu to the minimum so that I can keep all of my important extensions in one place. This lets me put the entire directory under version control and replicate my setup across machines easily. I’ve even put the git repository on GitHub if you’re interested!
So far, I’ve integrated YASnippet, the Emacs Code Browser, PyLookup, Rope, Pymacs, auto-complete, ido, linum, and CEDET into a full-featured editing environment. It’s working really well for me so far, and I would highly recommend considering Emacs if you’re looking for something lighter-weight than Eclipse.
Well, it’s been a while since I’ve posted! Probably time for an update, huh?
As some of you may have figured out, I’ve been working for Google since May. It’s been a good experience, and there’s lots of free food! I’m working on the Commerce Systems/Billing team, creating reporting tools that help our engineers provide even better service for customers. The whole experience has been rather enlightening, and I’m very grateful for the opportunity. I have about a month and a half left on my internship, and then I’ll be heading back to CC for my last year of undergrad study!
But, before I get there, I’m planning on adding some new equipment to my arsenal. I’ve made quite a bit of progress on building a RepRap 3D printer, and I’m quite excited to see it print! I have the frame built, the motors moving, the firmware compiling… The only setback is that I inadvertently burned out one of my Pololu stepper drivers. I’ve got two on order, so hopefully that will be remedied sometime in the next week. In the meantime, I plan on finishing the Cartesian robot portion of the machine and testing it with the various host software options to see which I like best. Stay tuned for more on that.
In software news, I’ve started really digging into Python. I’ve bought a copy of “Programming Python”, a beast of a book which covers most of the advanced usage I’m interested in. To that end, I’ve started writing some small scripts to help make things easier day to day. For example, I’ve written one that sends me an email when my server needs to have updates applied. Python is a ton of fun, and I’d highly suggest it to anyone wanting to put some play back into their programming.
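As a rough sketch of what a script like that can look like: the package check below assumes a Debian-style system with apt, and the function names, addresses, and SMTP host are placeholders of my own invention, not the actual script.

```python
import subprocess
import smtplib
from email.message import EmailMessage

def pending_updates():
    """List packages apt would upgrade (Debian/Ubuntu-style systems only).

    `apt-get -s upgrade` simulates the upgrade without changing anything;
    lines beginning with "Inst" name the packages that would be installed.
    """
    out = subprocess.run(["apt-get", "-s", "upgrade"],
                         capture_output=True, text=True).stdout
    return [line.split()[1] for line in out.splitlines()
            if line.startswith("Inst")]

def compose_report(packages, host="myserver"):
    """Build the notification email from a list of package names."""
    msg = EmailMessage()
    msg["Subject"] = f"{host}: {len(packages)} updates pending"
    msg.set_content("Packages awaiting upgrade:\n" +
                    "\n".join(f"  - {p}" for p in packages))
    return msg

def send_report(msg, sender="updates@myserver", recipient="me@example.com"):
    """Send the report via a local SMTP server (addresses are placeholders)."""
    msg["From"] = sender
    msg["To"] = recipient
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
```

Dropped into a daily cron job, something like `send_report(compose_report(pending_updates()))` (skipping the send when the list is empty) covers the whole task.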
To make writing Python scripts easier, I started looking for a new editor. Eclipse is great for larger projects with lots of source and provides some really nice features, but it’s heavy and slow and doesn’t lend itself well to the fast development that Python promotes. I decided having a lighter environment would be beneficial, so I turned to Emacs. I’ve been using it for the past 2 weeks at both work and home, and I’m really enjoying it. I think I might spend some time learning Lisp so I can extend it. I also need to get a good functional language under my belt…
So, in any event, my summer has been pretty good. My only complaint is that Ann Arbor is too far away… My girlfriend was accepted to the University of Michigan’s Artificial Intelligence Laboratory as a PhD student, and she’s been there since June working on a robot navigation system. I’m incredibly proud of her and somewhat jealous!
I’ll try to pay a little more attention to the blog now that things have settled into more of a routine; keep an eye on it for updates on the projects!
(P.S. I’m also on Google+, so feel free to drop me in a circle if you feel so inclined!)
When most people think of Computer Science, the first thing that pops into their head is an image of some greasy kid in a black t-shirt and sandals tippity-tappity typing away. While programming is a fundamental tool in the computer scientist’s kit, it’s no more the core of Computer Science than glassware is the core of Chemistry. In truth, the fundamental quality of all rigorous sciences, from the abstract (Mathematics) to the “hard” (Physics, Chemistry) to the “fuzzy” (looking at you, Psychology!) is that they all postulate fundamental truths about their area of study.
In Computer Science, these fundamental truths are based on assertions about the ability of a computational system to solve problems. Some problems are inherently unsolvable by computational systems, whether due to practical limitations (memory, processing speed, etc.) or theoretical boundaries. For example, let’s take three problems which we might want to give our computer to solve:
- We’d like to know how much time a ball dropped from rest takes to fall 3 meters on Earth.
- We’d like to know the shortest route through the 100 largest cities in Germany that passes through each city exactly once and begins and ends at the same city.
- We’d like to know, once we have written a program, whether or not it will run for a finite period of time or enter into an infinite loop.
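The first problem even has a closed-form answer: for a drop from rest, h = ½gt², so t = √(2h/g). A quick sketch:

```python
import math

def fall_time(height_m, g=9.81):
    """Time for a ball dropped from rest to fall height_m meters.

    From h = 1/2 * g * t^2, solving for t. A single constant-time step,
    no matter what height we plug in.
    """
    return math.sqrt(2 * height_m / g)
```

For 3 meters on Earth, this works out to a bit over three quarters of a second.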
These three problems illustrate three very different classes of computability. The first is in the class commonly referred to as P: Deterministic Polynomial Time problems. These can be solved in an amount of time with an upper bound given by some polynomial in the size of the input. In simple terms, this means that the amount of computation required to solve the problem doesn’t grow too quickly as the input gets larger. P includes constant time problems, which can be solved in the same amount of time for any input size (think “find the nth number in an array”); logarithmic time problems (binary search); and polynomial time problems (Bubble Sort at O(n²), or Quicksort at O(n log n) on average). This is generally the category we want to be in.
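Those growth classes are easy to make concrete. Here's an illustrative (not tuned) sketch of one representative of each:

```python
def nth_element(arr, n):
    # Constant time: indexing cost doesn't depend on the array's length.
    return arr[n]

def binary_search(arr, target):
    # Logarithmic time: each probe halves the sorted search space.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def bubble_sort(arr):
    # Polynomial (quadratic) time: nested passes over the input.
    arr = list(arr)
    for i in range(len(arr)):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
```

Doubling the input adds one probe to the binary search but quadruples the work the bubble sort does; both are still comfortably inside P.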
Our first problem is in P because it can be solved in a constant number of steps. Unfortunately, our second problem, the famous Traveling Salesman Problem, is significantly more complex: the brute-force approach must examine on the order of N! tours, where N is the number of cities. We can solve this problem, but it will take an extremely long time for even moderately sized inputs; for the 100-city instance above, that’s as many as roughly 9 × 10^157 orderings to check for the unique ideal solution. The decision version of this problem (“is there a tour shorter than k?”) belongs to the class NP, for Nondeterministic Polynomial Time: problems whose proposed solutions can be verified in polynomial time, even though no polynomial-time method for finding them is known. NP is a superset of P, meaning that every problem contained in P is also contained in NP; the open question is whether the reverse also holds, that is, whether P = NP. In fact, settling that question conclusively one way or the other is one of the major outstanding problems in Computer Science. The hardest problems in NP, the NP-complete problems (of which the Traveling Salesman decision problem is one), are important but difficult to compute; if one could find polynomial time algorithms for them it would be a huge advance in our computational abilities!
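To see the factorial blowup firsthand, here's a brute-force sketch of the tour search. The distance matrix is a made-up toy instance; only tiny inputs are feasible this way.

```python
import math
from itertools import permutations

def shortest_tour(dist):
    """Brute-force TSP: try every ordering of cities 1..n-1, starting
    and ending at city 0. Checks (n-1)! tours, so this is only
    practical for a handful of cities."""
    n = len(dist)
    best_len, best_tour = math.inf, None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# A toy 4-city instance: dist[i][j] is the distance between cities i and j.
toy = [[0, 1, 4, 3],
       [1, 0, 2, 5],
       [4, 2, 0, 1],
       [3, 5, 1, 0]]
```

At 4 cities this loop checks 3! = 6 tours; at 100 cities it would need math.factorial(99) of them, around 9 × 10^155, which is why nobody solves the German-cities problem this way.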
Finally, the last example demonstrates one of the fundamental limits of computation itself: this problem, the famous Halting Problem, is not solvable for all possible inputs. It belongs to the class of undecidable problems for which, despite our best efforts and algorithms, no algorithm can produce a correct answer for every input. While our techniques are powerful, there are inherent limits on what we can do with computers, and Computational Complexity Theory, together with Computability Theory, aims to find out exactly where those limits lie.
Injection molded and sand cast parts require a high rate of flow, and the material must stay above its glass transition (for polymers) or liquidus (for metals) until the mold is filled. If the material cools too soon, shrink cavities and distortions of the part can occur. The most common way to deal with this is to keep parts small, with thin walls, oriented so that the material doesn’t have far to travel. Might there be a way around this? Could it be possible to heat the walls of the mold to maintain high internal temperatures until the material has filled the mold entirely? I don’t have a foundry to try this myself, but I would love to see someone experiment with heating a sand mold while casting aluminum to see if it reduces or eliminates problems with shrink cavities. Any additional control that we can get over the cooling process can only help, right?