In this post I’ll be reviewing what has happened since my last post from early January. You can read up on the previous review by clicking here. To give you a summary, I used paid advertisements to find my first users for Kanji Book - a site to learn Japanese kanji using mnemonics.
Today’s post is again about my little side project, Kanji Book. If you read my last post on Promoting a side project without an audience, you will know that I figured out a way to find initial users for my project even without being an internet celebrity. The site should be doing great by now, right? Well, it turns out that the people who visited my site and were compelled by my value proposition of an easy Japanese learning tool didn’t quite get the simplicity of the app and mostly bailed out right after signing up. 50% of those sign-ups even went as far as verifying their email address and enrolling in my newsletter. Still, something has to be wrong with the website.
So I looked at the numbers for Kanji Book’s prospective impressions in Google AdWords. I finally understood how Google’s Keyword Planner works; my earlier confusion came from not reading their documentation first. As it turns out, the demand for learning Japanese is a little smaller than I initially thought.
First of all, let me introduce you to my little side project called Kanji Book.
Update: While writing this article, I actually found the best solution on Stack Overflow: How can a variable-height sticky footer be defined in pure CSS?
At the time of writing this post, I am in the middle of developing a Rust program using `#![no_std]`, which means there are no `Box` types available. Nonetheless, it is possible to implement a custom allocator, which then allows the use of `Vec` as well. That’s what I did in my project. Within the project, I have to iterate over various data types most of the time, which practically asks for custom `Iterator` implementations to abstract away some of the complexities arising from certain data structures, like the following:
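The original snippet is not shown here, but to illustrate the idea, here is a minimal sketch of the kind of custom `Iterator` I mean: a hypothetical fixed-size ring buffer that stays `#![no_std]`-friendly because iteration borrows the buffer and needs no heap allocation at all.

```rust
// Hypothetical example: a fixed-capacity ring buffer whose iterator
// walks the logical contents in order, wrapping around the backing array.
// No heap allocation is needed, so this works under #![no_std].
struct RingBuffer<const N: usize> {
    data: [u8; N], // backing storage
    head: usize,   // index of the oldest element
    len: usize,    // number of valid elements
}

struct RingIter<'a, const N: usize> {
    buf: &'a RingBuffer<N>,
    pos: usize, // how many elements we have yielded so far
}

impl<'a, const N: usize> Iterator for RingIter<'a, N> {
    type Item = u8;

    fn next(&mut self) -> Option<u8> {
        if self.pos >= self.buf.len {
            return None;
        }
        // Translate the logical position into a wrapped physical index.
        let idx = (self.buf.head + self.pos) % N;
        self.pos += 1;
        Some(self.buf.data[idx])
    }
}

impl<const N: usize> RingBuffer<N> {
    fn iter(&self) -> RingIter<'_, N> {
        RingIter { buf: self, pos: 0 }
    }
}
```

Because `RingIter` implements `Iterator`, all the usual adapters (`map`, `filter`, `take`, and so on) come for free, which is exactly the abstraction win hinted at above.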
For the last few days I have been constantly thinking about AMD’s upcoming Vega GPUs. So far we know only a little, but AMD has given us a fair amount of data spread across different places, so we can actually extrapolate Vega’s performance. For this purpose I’ll take two supporting arguments, which I’ll outline below. If you think my reasoning is not sound, please provide adequate explanations and, if possible, reputable sources to support your objections (on this Reddit post).