Sunday, February 24, 2008

07.REST

Our project does need to provide a RESTful interface because we need to construct the HTTP request that will eventually make the ambient device perform an action. Since all we are doing is changing the state of the ambient device by way of a trigger from the Hackystat sensor database, through the use of a hyperlink, all of the REST principles can be used to construct that link.

For instance, we can give each action and trigger an ID and use XML to link them together, but there only needs to be one representation of the resource. This representation of the resource is the "action" in the form of an HTTP hyperlink to the ambient device. At this point, the standard methods we would use are HTTP methods like GET and PUT, and the communication would be stateless since all data is kept on the Hackystat sensor database. Since we are just generating the hyperlink itself and then sending it to the ambient device via the Violet server or Ambient server, we can still use the REST architecture to create that link. We also use the web UI to link each trigger to a specific action, and can use REST there as well.
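To make this concrete, here is a minimal sketch in Java of what building and firing one of those action links might look like. The base URL, query parameter names, and action string below are assumptions for illustration only; the real Violet/Ambient server address and parameters would come from the device's documentation.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class AmbientActionSender {

      // Hypothetical base URL and parameter names, not the real Violet API.
      private static final String BASE = "http://ambient-server.example.com/api";

      // Builds the "action" hyperlink for a given device, token, and action ID.
      public static String buildActionLink(String deviceId, String token, String action) {
        return BASE + "?sn=" + deviceId + "&token=" + token + "&action=" + action;
      }

      // Fires the link as a stateless HTTP GET; no session state is kept anywhere
      // except the Hackystat sensor database that produced the trigger.
      public static int sendAction(String link) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(link).openConnection();
        connection.setRequestMethod("GET");
        return connection.getResponseCode();
      }

      public static void main(String[] args) throws Exception {
        String link = buildActionLink("NABAZTAG-001", "secret-token", "wiggle-ears");
        System.out.println("Sent " + link + " -> HTTP " + sendAction(link));
      }
    }

The only thing the trigger code has to know is how to put the link together; the device (or its server) owns the actual state change.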

Yes, it does obey the REST design principles because there is an ID for every given "thing," that thing being the Sensor Data resource instance. There is also a project/user URI specification, which is what is used to give each "thing" its ID. Everything is done with XML and uses the default implementation constraints to link all the data together. It uses the HTTP methods to support access control, and this is where the standard methods come into play. All data is returned in XML form, so this also follows the REST principle of returning a representation of a resource; although it is not multiple representations, it does return the resource through an XML representation.
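As a rough sketch of what pulling one of those XML representations might look like from our side, here is a small Java client. The port and the sensordata URI pattern are placeholders, not the exact Hackystat SensorBase paths, and a real request would also need the user's authorization.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SensorDataFetcher {

      public static void main(String[] args) throws Exception {
        // Placeholder URI: each sensor data instance lives under a user/project
        // path, which is the "an ID for every thing" idea in practice.
        URL resource = new URL("http://localhost:9876/sensorbase/sensordata/user@hawaii.edu");

        HttpURLConnection connection = (HttpURLConnection) resource.openConnection();
        connection.setRequestMethod("GET");               // standard method, no session state
        connection.setRequestProperty("Accept", "text/xml");

        // The response body is the XML representation of the resource.
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(connection.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
          System.out.println(line);
        }
        reader.close();
      }
    }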

The projectViewer, on the other hand, does not seem to obey the REST principles. Upon further research, it seems like the projectViewer relies on RPC, the Remote Procedure Call. An RPC sends a request to a remote server, or even the local server, passes along the information that it gets back, and then moves on with the rest of the code. So unlike REST, not everything is exposed as linked resources; instead, sections of the code can be located on other servers and remotely activated.
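The two styles are easier to see side by side. Both endpoints below are made up for illustration; the point is that the RPC style posts a procedure name and arguments to one service URL, while the REST style addresses the data itself with its own URI and a standard method.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestVersusRpc {

      public static void main(String[] args) throws Exception {
        // REST style: the resource has its own URI, and a standard method acts on it.
        HttpURLConnection rest = (HttpURLConnection)
            new URL("http://localhost:9876/sensorbase/projects/user@hawaii.edu/Default")
                .openConnection();
        rest.setRequestMethod("GET");
        System.out.println("REST GET -> " + rest.getResponseCode());

        // RPC style: one service endpoint, and the body names the remote procedure
        // to run along with its arguments.
        HttpURLConnection rpc = (HttpURLConnection)
            new URL("http://localhost:9876/projectviewer/rpc").openConnection();
        rpc.setRequestMethod("POST");
        rpc.setDoOutput(true);
        OutputStream out = rpc.getOutputStream();
        out.write("procedure=getProjectSummary&user=user@hawaii.edu".getBytes());
        out.close();
        System.out.println("RPC POST -> " + rpc.getResponseCode());
      }
    }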

Monday, February 11, 2008

04.Hackystat

I was able to complete the entire assignment and didn't have much difficulty. The only problems I encountered were trying to get through all of those guides. At first, it really seems overwhelming and tedious to install all the sensors. To start, Eclipse was simple enough and I had no problems with that. But installing Ant and all the other QA tools left me a little confused. I'll admit that it did take me a while to get back into the habit of using Ant, and that might have contributed to some of that confusion. But what really kept me from moving on was just the sheer volume of material to go through. I didn't know where to start or where to go next without carefully reading the pages. In the end there wasn't much to do at all other than setting all the environment variables. All in all it only took me an hour to install all the sensors, sign up for an account, and get everything up and running, so it wasn't much of a roadblock at all.

After having experience using Hackystat, I can definitely see it as a rich source of information that can be used with the Nabaztag or the other ambient devices to clearly signal to users and developers almost any problem that might be plaguing their projects. As for my own development while coding, it's a great tool to see the progress of your work and to make sure it's done right.

As for the three prime directives, it clearly fulfills all three. It accomplishes a useful task because this system will instantly update the user with all kinds of information. It's also way easier to read than the command-line output that you get in MS-DOS. For the second prime directive, I was able to install all the sensors, sign up for an account, and start using it all within an hour, so it easily fulfills that directive. Lastly, the third prime directive is fulfilled as well because there are more than enough wiki pages that go into detail on how a developer can install the system and build something more on top of it.

It's easy to see that Hackystat fully covers all three prime directives, and it also shows me how a "real life" program should be documented. The installation guides, although a little confusing, document the entire process thoroughly. There are pages on how to use every aspect of the program, as well as many pages that show how to develop on top of it.

Friday, February 8, 2008

worse is better.

So I read an article today and I thought, wow. Philosophy and Computer Science all rolled into one. Jeff Atwood's blog post on worse is better was about Steve Martin's book, and how Steve Martin talked about not trying to be great, because you will inevitably be great for at least one night, but to strive to be consistently good. Then he relates it to how programmers should also strive to be consistently good. This made me think about my own programming as well.

It seems that as an undergrad studying computer science, every one of my projects seems to be my shot at being the best, writing the best code that I can possibly write. Now, is that whole approach of trying to be great on each project striving to be great once, or trying to be good all the time?

I think that I should not focus so much on trying to write the best code for each project, but rather just shoot for writing good code consistently. To be honest, I really have no idea how to go about this. But it's nice to think about for now, at the end of the week, while waiting for my friend to get out of class so that we can go lift.

Tomorrow I'll be back at work on my 414 project, but I'll try to remember to consistently write good code. Whatever that is.