My name is Damian Hites. Most people call me Dom.
I currently work at Gracenote, a media recognition company. I work in the automotive and embedded group, writing software that ends up in car stereos and other devices.
I graduated from UC Berkeley with a degree in Computer Science.

About Me

Languages currently being used: C, Perl
Languages used in the past: C++, Java, Scheme, Tcl/Tk

My projects (UC Berkeley):


- I worked in an undergraduate research group named GamesCrafters, dedicated to solving two-player perfect-information games (tic-tac-toe, checkers, chess). In this group we developed a system called GAMESMAN, which consisted of: 1. game modules, each implementing the rules of a specific game (as well as many variations on it); 2. the core architecture, made up of different types of databases, game solvers, error checkers, and statistical analysis tools; and 3. the front ends, including an ASCII front end written in C for quick development and a graphical front end written in Tcl/Tk. I worked on all aspects of the system. For more information about the group, check out their website. If you are familiar with CVS, you can check out their latest product on SourceForge.

My projects (Sprint):


- The first project I worked on at Sprint was converting CMON, a program used to collect network content and flow data, into DCMON, or Distributed CMON. The idea was to have a central server that all nodes running CMON could report to and take commands from, so that the researchers could control all their CMON boxes from one location and have a single place to manage and aggregate the collected data. The system we ultimately designed followed a server/client model: a central server (running Apache and MySQL), consisting mostly of PHP scripts and a database, and the CMON nodes, consisting of threaded C++ code. I worked on all aspects of the system, starting with the C++ nodes and then spending a lot of time on the database and the front end, written in PHP and JavaScript. I cannot share any of the code we wrote or worked on, but I have a demo of the front end (user name: user, password: password). All the data is faked, and certain features are not available since the demo is not hooked up to any real back end. To see our design process, check out the wiki we used.


- The second project I worked on at Sprint was designing and implementing a demo for research being done on in-network compression. The idea of in-network compression is to compress the data between two points in a network so that the rest of the network is unaware of, and unaffected by, the compression. To accomplish this, you need to introduce a compression/decompression box at each end of a specific connection. To demonstrate this functionality we used two laptops as the compression/decompression boxes: one acted as a router running both DHCP and the compression/decompression software, while the other connected as the DHCP client running the compression/decompression software at the other end. We would surf the web on the client laptop, and as the data was routed through the first laptop it would be compressed. The mangled packets were then forwarded to the second laptop, which would uncompress the data and pass it on to the web application. All the coding was done in C++. We used the netfilter library to mangle the packets, the zlib library to compress the data, and the FLTK library to create a simple front end for viewing compression statistics that we generated and processed on the fly. Once again, I am unable to share the code, but I have some screenshots of the front end while it was running.


- The final project I worked on was taking PCMD (per-call measurement data) and processing it to estimate a latitude and longitude for each call. Once we had generated all the latitudes and longitudes, we mapped them in order to view call densities. The process also allowed us to verify the accuracy of some of the measurements provided in the PCMD. All of the data manipulation was done in Perl, while the mapping was done in PHP and JavaScript using the Google Maps API. I have a demo of the first iteration of the mapping script; the data provided for the script is faked.

My projects (Gracenote):

Build System/Process

- One of my first responsibilities at Gracenote was to manage both the configuration creation process and our build scripts. At the time, creating a configuration was a manual process of creating Visual Studio projects and various Makefiles that took up to 3 hours per configuration. Faced with making 7 configurations, I wrote a Perl script that allowed us to automate the process. This was eventually taken over by a Sustaining Engineer, freeing me up to do other things.


- In my second year at Gracenote I took over ownership of a tool we called the Cooker. This tool processes exports provided by the database team and stores the metadata from each export in various files, in a format that our library knows how to read. Library changes in each release regularly required changes to this tool to support new features and improve performance.

On Demand Lookups (ODL)

- On Demand Lookups, or ODL, was a feature that enhanced all of our recognition products. To identify any piece of media, a lookup is required: either a search of a local database or a query sent to the Gracenote Service. Local databases have limited data, and connectivity to the Service is intermittent on some devices. ODL allows the system to remember lookups that for some reason failed to produce an adequate response; these aggregated missed lookups can then be processed at a later date to attempt to get a better response. The feature also allowed lookups to be performed through a proxy server in order to make more efficient use of the bandwidth between the device and the proxy server.

My current projects (Gracenote):

Auto 2.0

- We are currently working on completely re-architecting our library. Much of our existing library is legacy code that is becoming increasingly difficult to manage. This project is an attempt to address all the issues that we have observed in maintaining the legacy code so that we may move forward more efficiently.