This week I wrote a gem called stock_info. I made a short video demonstrating its functionality:
This post describes a bit of my process for writing the gem. After the initial setup process that I describe in this post, I was both anxious and elated. I was a little overwhelmed because I had never created a program from scratch before, but excited because of all the possibilities of what I could create.
It was time to decide on a subject for my project and proceed to write it into a gem that might actually be useful. Some background info: I like to trade stocks. I enjoy researching companies and keeping up to date on all the recent news that might affect the market. Sometimes it can be tedious to go all around the web looking for this information though. So I created a simple CLI app that I can use to get some basic info on different stocks quickly and easily.
I started by creating a CLI menu object and two data objects, “news” and “stock”. My initial intention was to store all of the stock and news objects in class arrays and search those arrays for data before re-scraping, but I realized this wasn’t terribly useful because the data changes constantly during trading hours. With this in mind, I decided to make the gem scrape only during market hours. In other words, the gem scrapes data on initialization, but scrapes again only if the market is open; otherwise, it just returns the previously stored data. The gem still scrapes for news articles on the ‘refresh’ command. The gem does store the retrieved articles; there is just currently no command to print them. I assumed that the user would always want the latest news, as new articles are released 24/7. It would be easy to implement a feature letting the user display previously retrieved articles without scraping, but I personally cannot imagine it being useful, so I chose not to implement it; it would add unnecessary complexity to the gem.
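The caching strategy above could be sketched roughly like this. This is a hypothetical illustration, not the gem’s actual code: the `Stock` class shape, the scraper interface, and the market hours (weekdays, 9:30–16:00) are all assumptions for the example.

```ruby
require "time"

# Sketch of the described strategy: scrape on initialization, then
# re-scrape only while the market is open; otherwise serve cached data.
class Stock
  attr_reader :symbol, :data

  def initialize(symbol, scraper)
    @symbol  = symbol
    @scraper = scraper
    @data    = @scraper.scrape(symbol) # always scrape once on creation
  end

  # Re-scrape during trading hours; otherwise return the stored data.
  def refresh(now = Time.now)
    @data = @scraper.scrape(@symbol) if market_open?(now)
    @data
  end

  private

  # Assumed market hours: weekdays, 9:30 to 16:00 local time.
  def market_open?(now)
    return false if now.saturday? || now.sunday?

    minutes = now.hour * 60 + now.min
    minutes >= (9 * 60 + 30) && minutes < 16 * 60
  end
end
```

With this shape, the CLI’s ‘refresh’ command can simply call `stock.refresh` and not worry about whether the market is open; stale-but-valid data is returned for free outside trading hours.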
In the tutorial provided by Flatiron, Avi Flombaum suggests laying out the complete logic of our code before we actually implement scraping data from external sources. I found this very difficult, so I began pulling data very early in the development of my gem. I couldn’t quite understand how I was going to manipulate the data without actually having data to manipulate. Once I was actually scraping data, the program developed itself.
One feature I am particularly proud of is the ability to open an article in the web browser. I knew it was possible, so I dug around on Google for a while until I found a way to implement it. At first, the code threw a long error, but I also found a way to suppress it, as it was inconsequential to the gem’s functionality. Overall, I enjoyed creating this gem immensely and am excited to keep working on it, as I will use it regularly.
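One common way to do this, which may or may not match the gem’s actual approach, is to shell out to the platform’s URL opener and redirect its output to the null device so harmless warnings don’t clutter the CLI. The `browser_command` helper and the OS detection patterns below are assumptions for the sketch.

```ruby
require "rbconfig"

# Pick a plausible "open this URL" command for the current platform.
# (On Windows, "start" is a cmd.exe builtin, so this is a simplification.)
def browser_command(host_os = RbConfig::CONFIG["host_os"])
  case host_os
  when /darwin/      then "open"     # macOS
  when /mswin|mingw/ then "start"    # Windows
  else                    "xdg-open" # most Linux desktops
  end
end

def open_in_browser(url)
  # Redirect stdout and stderr to the null device so any noisy but
  # inconsequential output from the launcher is suppressed.
  system(browser_command, url, out: File::NULL, err: File::NULL)
end
```

Redirecting with `out:`/`err:` keyword options to `Kernel#system` is tidier than appending `2>/dev/null` to a shell string, and it avoids passing the URL through a shell at all.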