Welcome to the website of an amateur research and development center! Sadale started off with game development and has since made pretty much everything: games, music and research work. More products will be developed in the future (well, as if anyone would care about that).
All products mentioned in this website are my own work, unless stated otherwise.
Our core value: Development in Stability
Online Middle-Square Method Generator - A web tool that I couldn't find elsewhere. [Link to the Tool]
animeVPS - A low-end, donation-based VPS service that I'm hosting. [Website]
歡迎嚟到薩地魯嘅網站. 曾經係業餘遊戲開發者, 今日薩地魯利用佢嘅業餘時間, 乜都整餐懵. 包括遊戲, 音樂, 研究項目等等.
絕大部分嘅blogpost只有英文版. 當然, 都有D有廣東話版嘅.
核心文化: 穩定為先, 開發為本
Bonvenon al la retejo de Sadejlo. Sadejlo estis amatora kreinto de ludoj. Sadejlo nun kreas ludojn, muzikojn kaj esploradojn. Ĉi tiu retejo estas plejparte Angla, kun iomete Esperantaj skribaĵoj.
Slogano: Stabile Kreadu
This year, I did another Global Game Jam, and I did it in Hong Kong again, mainly because I was too lazy to try out another jam site this year. :P I did something different this time: I made music instead of programming.
I spent quite a while practicing with Musical Palette - Melody Composing Tool, LMMS, LabChirp, sfxr and Audacity. I've figured out an efficient method to produce music: come up with the melody and chord harmonization in Musical Palette, then import them into LMMS for further processing. I managed to produce a few one-minute pieces of decent quality, each within 24 hours.
As for LabChirp and sfxr, it's just a matter of using the randomizer and manually fine-tuning the parameters. Audacity is even easier: it's mainly useful for noise reduction and applying effects.
Just like in previous years, I came to the site without a team. Since I planned to do music this year, I couldn't work alone, so I looked for a team right after I entered the jam site. I asked to join a random team and got politely rejected. Then another team with three existing members waved at me and asked if I was alone. I said yes, told them that I made music, and got accepted into the team. Then I had the dinner provided by the organizers. Here's a pic of more than 300 jammers begging for free food:
Midway through our game design discussion, two of the team members left their seats temporarily. Since I had no idea what the other team members did, I asked the remaining member about their roles. He told me that he did art and that another member did programming; when he tried to explain the role of the last member, he was like "uhm... uh... he's... uh... good at coming up with, uh... uh... ideas and presenting, uh... the ideas". :P I ended my question with "Ah. He does marketing. That's good." What I actually thought was "He's gotta be an idea guy!" :P
When all of the members were back in their seats, we reached a consensus on the game design in good time.
Then we tried connecting to the internet over WiFi. It was very unstable, so I tried USB tethering over mobile data with my smartphone. Surprise! Even mobile data was more stable than the WiFi provided by the organizers. Since I had a data cap, I had to use my data wisely. So no YouTube for me.
I started drafting a piece of music on the first day, using Musical Palette.
On the second morning, I managed to catch the shuttle bus provided by the organizers and arrived at the jam site early, then had breakfast. There's some open area at my jam site, and it was rather interesting to see people standing and eating outside. Some of the jammers kept a bit of distance between each other, possibly because they didn't know each other.
After breakfast, we got back to work. I gave the on-site WiFi another shot with no luck, and used my mobile data again.
Then I was sitting next to the marketing guy. I sporadically took a peek at what he was doing. That was funny: most of the time his Mac laptop was playing YouTube videos, surfing Facebook or running instant messengers, while he held a smartphone and played games on it. That was impressive. He was taking multitasking to the next level. To be fair, offering critical opinions requires playing other games. Anyway, he did spend a bit of time looking up how to make an awesome game trailer, and studying good indie games.
Then the real fun began. I finished composing with Musical Palette and started working on the same piece in LMMS. However, it didn't go as smoothly as I thought. Midway through, I asked my teammates to review it. The marketing guy complained about it, demanding a piece of music described in abstract wording that I couldn't understand. :P As you know, music is difficult to describe in words.
After a while, he gave me an example of the kind of music he was interested in: the BGM of Plants vs. Zombies, a piece with drums and without melody.
So I tossed away the old piece, made a new one, and came up with this in Musical Palette:
I had never made this sort of music before: the same chord repeated over 16 segments (or "phrases" in that program), with some chord variation techniques. It isn't the sort of music I like. It sounds extremely stressful, like something from an air traffic control game. But it does fit the theme of the game. Then I mastered the piece in LMMS:
That's crazy. The same chord is played for a long time, as shown above. I added some sidechaining to it. Then I showed my teammates the piece, and they were OK with it. It seemed to me that it wasn't perfect to the marketing guy, but apparently he compromised and told me the music was OK, possibly because of the time constraint.
Then I started working on the sound effects. That was rather easy for me. Depending on the kind of sfx, I used LabChirp, sfxr, or remixed recorded voice with Audacity.
Working with LabChirp and sfxr was easy: just click the randomization button until I got a sound close to the one I wanted, then tweak it a little, and that's it. Using Audacity was also easy, but with a slightly different workflow. For simple sound effects, I just performed noise reduction and added reverb. For sound effects like many people saying the same thing, going "wow", or laughing, I recorded the sound multiple times myself, performed noise reduction, overlapped the recordings, added a bit of reverb, and there we have it.
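The layering trick can be sketched in code. This is a hedged illustration, not the actual Audacity workflow: several takes of the same sound are summed sample by sample, then normalized so the mix doesn't clip. The tiny sample lists stand in for real PCM data.

```python
# Sketch: layering several recorded takes of one sound into a "crowd"
# effect, similar to overlapping tracks in Audacity.
# The sample lists below are placeholders for real PCM audio.

def layer_takes(takes):
    """Mix several sample lists and rescale so the peak is at 1.0."""
    n = max(len(t) for t in takes)
    mixed = [0.0] * n
    for take in takes:
        for i, s in enumerate(take):
            mixed[i] += s
    peak = max(abs(s) for s in mixed) or 1.0
    return [s / peak for s in mixed]  # normalized to [-1.0, 1.0]

takes = [
    [0.1, 0.4, -0.2, 0.0],
    [0.2, 0.3, -0.1, 0.1],
    [0.0, 0.5, -0.3, 0.2],
]
print(layer_takes(takes))  # loudest sample becomes 1.0
```

In practice each take would also be slightly offset in time so the voices don't start in perfect unison, which is what makes it sound like a crowd.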
Speaking of noise reduction, I'm rather surprised at Audacity's noise reduction capability. Behold a screenshot showing the power of the noise reduction effect:
We were in a noisy room with a lot of discussion from the teams sitting right next to us, as shown below:
The result was brilliant. I had seriously thought that I would have to leave the room to record the voice. I'm not sure if it was solely Audacity, or the fact that I was using a headset (Kingston HyperX Cloud Core) that let me put the microphone close to my mouth for a clear recording. But still, considering the environmental noise, the result of the noise reduction was very impressive.
At the end of the second day, I went home. All of my teammates said they would stay overnight.
On the third day, I woke up a little late, so I couldn't get on the shuttle bus and made my own way to the jam site. After arriving, I found that our programmer had gone home the previous night and would be working remotely for our team that day.
The third day was relaxing. There wasn't much for me to do. After goofing off for a while, our team started producing the promotional video for our game. The rule at our jam site was to make a one-minute trailer, so I made a piece of music dedicated to it. The art guy did the video and synchronized the text with the music. The resulting video was quite good. The programmer uploaded the game, and our team members were happy with the game and the trailer. I haven't had time to try the game our team made, though, because I was on Linux, which the game doesn't support.
After that, the marketing guy came up with a story for the game's description, and I extended the story with better wording. Right after the submission deadline, the artist and the marketing guy left, exhausted from staying overnight. In the end, the marketing guy didn't do much other than giving critical opinions and discussing the game design. He seemed more like a quality control guy to me.
I got a bit of time to chat with a few interesting jammers on site. Before I could really socialize with them, the presentation session started. It mainly consisted of playing the one-minute trailers. When my turn came, I was rather surprised that the volume of our trailer was so low. I don't know whether it was the audio of the video itself, or the staff had set the volume too low. After that, the sponsors gave awards to some well-performing teams. And that was the end of Global Game Jam 2018 for me.
We took the shuttle bus to a major metro station and went home.
Back home, I tried playing the game our team made. I was rather disappointed: it was severely bugged and not even remotely playable. The trailer looks good, but the game itself sucks. Then I tried other games produced at our jam site. They had the same problem: super buggy, not playable, a good or mediocre trailer, but no fun to play at all. Wow. Seriously?
I guess I've finally figured out the truth about Global Game Jam in Hong Kong: almost all the games here suck, paired with funny or semi-interesting trailers. Many jammers are just interested in getting an award. That's something I truly hate, because it isn't what a game jam is about. If you like 48-hour game development competitions, you could have just joined Ludum Dare!
I don't know. If I have enough spare time, I should seriously consider joining Global Game Jam elsewhere next year. The Macau one seems feasible because the people there speak Cantonese, so there wouldn't be any language barrier for me. And the bridge connecting Hong Kong and Macau should be completed by next year, which would make travelling there easy. :)
Despite the jam's imperfections, it was good enough. I'm happy about it. :)
Now I have a few pieces of music lying around. Some were produced for practice before the jam; one is incomplete and was produced during the jam. Perhaps I can turn them into songs in the future.
I also have some sound effects. Maybe I can make an asset pack out of them.
Start your new year with the song I've just released! Announcing Axial Inclination, the first English song I've ever produced! The song is released under the CC BY-NC 4.0 license. Its source files are available at the end of this blogpost.
Just like the last song, I made this one alone. The music was made with LMMS. The workflow was a bit different from the last one: this time I used different harmonization over different parts of the song so that it sounds less dull. I also carefully mixed the instrument tracks to add some dynamics. Hopefully this makes the song sound better than the previous one. :)
Here's a screenshot during the production of the song:
In addition, I modified espeak-ng, a text-to-speech engine, to generate the vocals, which were later mixed into the song using LMMS. I've been told that the lyrics sung by this vocal are difficult to understand. Nevertheless, it's still awesome to have a FOSS vocal synth.
I've been busy lately. I'll release my changes to espeak-ng when I get time. This espeak-ng vocal thingie deserves a separate blogpost, which will be available later this year. :)
The CC BY-NC 4.0 license grants you permission to redistribute and modify this music for non-commercial purposes, provided that you credit "sadale.net". If you do not wish to attribute sadale.net, or you'd like to use it commercially, please contact me via the email button on this website and let me know what you'll be using it for. It's highly likely that I'll grant you permission to use this music.
呢首係我第一首公開發放嘅歌, 歌名係"呢首歌嘅歌詞好奇怪". 首歌係關於呢首歌同埋佢嘅歌詞有幾咁唔掂嘅.
Behold my first song ever released publicly: "The Lyrics of This Song are Weird". The song is about the suckiness of the song itself and its lyrics.
首歌係我自己一個人整嘅(包括把聲同段片). 整左超過一個月. 如果你鐘意嘅話, 麻煩幫段片俾個like, subscribe個Youtube channel同埋share下俾你D friend. 如果呢首歌嘅反應良好, 我響未來將會整多D類似嘅歌. 多謝支持! :)
I made the song (including the vocals and the video) on my own. It took me more than a month to produce. If you enjoyed it, please give the video a like, subscribe to my YouTube channel and share it with your friends. If the reception of this song is good, I'll spend more time composing similar songs in the future. Thank you very much! :)
我原本係為左整game而學整音樂嘅. 直至到幾年前, 我大概每一年就整一首廣東話歌. 呢首係我整嘅第三首歌. 之前果兩首太差, 所以我冇放到出來. 呢首我覺得唔算好好, 但係都算係咁啦. 所以我就決定放出來喇.
I originally learned music composition for game development. Since a few years ago, I've composed a Cantonese song roughly once a year. This is the third song I've made. I didn't release the previous two because their quality was too bad. This one isn't good, but it isn't that bad either, so I decided to release it.
呢個project總共用左8個軟件. 包括LMMS, Audacity, ProjectM, SimpleScreenRecorder, Inkscape, Spriter Pro, Aegisub同FFmpeg. 除左Spriter Pro之外, 其他都係免費嘅開源的軟件.
In total, 8 software applications were used for this project: LMMS, Audacity, ProjectM, SimpleScreenRecorder, Inkscape, Spriter Pro, Aegisub and FFmpeg. Except for Spriter Pro, all of them are FOSS.
The music was composed with LMMS, a music composition application designed for making music from scratch; I briefly contributed to its development a few years ago. The voice was recorded and processed using Audacity, with some audio engineering performed on the vocal track.
The procedure for composing this song is shown below:
我無正式學過作曲. 如果我用錯D專有名詞嘅話, 麻煩同我講聲. 我會更正. 另外, 以上步驟唔係唯一嘅作曲嘅方法. 以前我都用過其他方法作曲. 呢D步驟只係想解釋返我點整呢首出來嘅姐.
I haven't formally studied music composition. If I've used any of the terms above incorrectly, do tell me and I'll fix them. Also, note that this is not the only way to produce music; I've tried other approaches in my other songs. The steps above are just what I did for this specific song.
Here's a screenshot of the song being edited in LMMS:
作廣東話歌同其他語言嘅歌有一個好大嘅分別. 就係要啱音. 我地有成6個音, 要對返個melody其實都有D難度. 仲要整到個歌詞嘅意思都要啱, 難上加難. 所以我作作下就卡死左喇!
Since Cantonese is a tonal language, I have to match the pitch of the melody notes with the tones of the Cantonese characters. That makes composing a Cantonese song very tough, because it's difficult to find words with both the correct tones and the correct meaning. I got stuck midway through writing the lyrics.
Therefore, I developed a tool to help me with this. It was written in Python. Using three Chinese word databases that I found on the internet, combined with the character tones from the Chinese Character Database: With Word-formations (which I further modified manually, because some tones have shifted in the modern Cantonese we speak today), it generates a list of words with matching tones:
由上面嘅cap圖可以見到, 呢個工具可以列出個database嘅啱音嘅字(但係有小量錯誤). 例如我打240, 就可以搵到同"240"同音嘅字. 包括"亂晒籠, 垃圾蟲, 定晒形, 未夠喉, 滑鐵廬, 落晒形, 鼻涕蟲".
As shown in the screenshot above, the tool lists all matching words it found (with some minor errors). For example, if I type "240" (二四零), the first character of a word has to match the tone of 二, the second character must match 四, and the third must match 零. The filtered words are displayed, including "亂晒籠, 垃圾蟲, 定晒形, 未夠喉, 滑鐵廬, 落晒形, 鼻涕蟲" from the Cantonese word database.
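The core lookup can be sketched in a few lines of Python. This is only an illustration of the idea, not the real tool: the tiny databases below are hand-made assumptions (tone numbers follow Jyutping, and may contain the same kind of minor errors mentioned above), whereas the real tool used large word databases found online.

```python
# Sketch of the tone-matching lookup: map a digit string like "240" to
# the tones of the spoken digits, then list words with the same tones.
# All data below is illustrative; the real tool used big databases.

# Jyutping tones of the spoken digits: 零=ling4, 二=ji6, 四=sei3.
DIGIT_TONE = {"0": "4", "2": "6", "4": "3"}

# Word database: word -> per-character tone sequence.
WORD_TONES = {
    "亂晒籠": "634",   # lyun6 saai3 lung4
    "未夠喉": "634",   # mei6 gau3 hau4
    "鼻涕蟲": "634",   # bei6 tai3 cung4
    "一樣": "16",      # jat1 joeng6
}

def words_matching_digits(digits):
    """Return every word whose tone sequence matches the digit string."""
    tones = "".join(DIGIT_TONE[d] for d in digits)
    return [w for w, t in WORD_TONES.items() if t == tones]

print(words_matching_digits("240"))
```

With real databases the only extra work is parsing them into the `WORD_TONES` shape, which is why the actual tool could stay quite small.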
個工具仲可以俾你揀database. 目前個工具有廣東話, 大陸普通話, 同埋台灣國語嘅詞語嘅database. 雖然唔係所有歌詞都係用呢個工具作嘅, 但係呢個工具用來做brainstorming同埋搵D啱音嘅四字成語真係無得輸. 我以後應該都會繼續用呢個工具來填詞.
Database selection is also supported. The current version of the tool has Cantonese, Mainland Mandarin and Taiwanese Mandarin word databases. This tool helped a lot while I was writing the lyrics for this song. Although I didn't use it for all of the lyrics, it was very useful for brainstorming and for finding four-character Chinese idioms matching the tones I wanted. I'll probably keep using it for my future songs.
不過好可惜, 我唔清楚部分database嘅使用條款. 所以我係唔可以放呢個工具出來嘅. 同大家講返聲唔好意思先.
Unfortunately, the licensing of some of the word databases is unclear, so I cannot release this tool publicly. I'm sorry about that.
填完詞就用Audacity錄音同改音. 下面幅圖係我改音嘅過程. 只要複製highlight左果part就可以延長隻字個音長. Delete左果part就可以縮短隻字個音長. 呢個步驟不停重複, 直至做到把聲同首歌同步為止.
After the voice was recorded according to the lyrics, Audacity was used for audio engineering. The image below shows how I changed the duration of a sung Cantonese character: duplicating the highlighted part extends the character's duration, while deleting it shortens the duration. This process was repeated until my voice was synchronized with the melody.
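The duplicate-or-delete trick above can be sketched on a plain list of samples. This is a toy illustration, not Audacity's implementation: in real audio the region boundaries also have to fall on similar points of the waveform, or the splice will click.

```python
# Sketch of the region trick described above, on a plain sample list:
# repeating a region lengthens a sung note, deleting it shortens it.

def extend(samples, start, end):
    """Repeat samples[start:end] once, lengthening the note."""
    return samples[:end] + samples[start:end] + samples[end:]

def shorten(samples, start, end):
    """Drop samples[start:end], shortening the note."""
    return samples[:start] + samples[end:]

note = [1, 2, 3, 4, 5, 6]
print(extend(note, 2, 4))   # [1, 2, 3, 4, 3, 4, 5, 6]
print(shorten(note, 2, 4))  # [1, 2, 5, 6]
```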
之後Audacity整出來嘅track會放返入LMMS裏面. 最後首歌係用LMMS generate出來嘅.
After that, the vocal track produced with Audacity was imported into the LMMS project. Finally, the song was rendered with LMMS.
After the song itself was complete, a video for it was produced.
The audio visualizer used in the video was ProjectM, a real-time audio visualizer. The visualization was recorded using SimpleScreenRecorder, which supports recording the OpenGL output of any program. Compared with software-based screen capture, recording the OpenGL output is much more efficient, so the resulting video is almost lag-free.
The banner image was created with Inkscape and animated with Spriter Pro, which generated image sequences of the animation. The subtitle editor used for this project was Aegisub, which produced a subtitle file. An extra image was drawn with Inkscape for the end scene asking the viewer to subscribe to my channel. Finally, the audio output by LMMS, the audio visualization, the banner image sequence and the subtitles were combined using FFmpeg. That's it!
我係作曲新手. 呢首個我花左好多工夫, 整左成超過一個月. 如果你鐘意嘅話, 麻煩幫忙share一下. 如果你地鐘意, 我日後會整多D呢類嘅歌. 多謝支持!
I'm rather new to song production. More than a month of work went into this song. If you enjoyed it, please take a moment to share it. Similar songs will be produced if the reception of this one is good. Thank you very much! :)
除此之外, 我都歡迎未來合作. 如果你有興趣, 可以send封email俾我架(email見網頁頂部)!
In addition, future collaboration is welcome. Feel free to drop me a line at the email address on this website.
This year, I developed the game E. M. Wave Jammer. It is the world's first telephone game at Global Game Jam, playable by dialing a telephone number.
I FUBAR'd in last year's game jam. Fortunately, I did much better this year.
This game is for entertainment only, no political message intended.
For Hong Kong SAR phone numbers, dial 54839953. From outside Hong Kong SAR, dial +85254839953 with Skype; we do not accept international non-Skype calls, to save on operational cost. The game has Cantonese (press 1), Mandarin (press 2) and English (press 3) versions. Please note that this phone number is temporary and will change after I've finished setting up my phonesite.
The game takes place in Japan during the Cold War era. North Korea is using advanced electromagnetic wave technology to send signals to Cuba, plotting to attack Japan. The player takes the role of the commander of the telecommunications department, responsible for jamming the signals between them.
For ease of command, Japan is divided into 6 zones. The electromagnetic waves from North Korea propagate through zone 1, zone 2, zone 3, zone 4, zone 5 and zone 6, all the way to Cuba.
The player uses a limited amount of electricity to build jammers. Electricity is consumed when building a jammer; none is needed to operate it. The more electricity spent building a jammer, the more powerful it is. For example, a 5W jammer attenuates the signal by 5W.
Any non-attenuated signal that gets through becomes military information in the enemy's hands. When the information level reaches 100%, you lose. The information level increases by the wattage of the signal received divided by the wattage of the signal sent. There is no way to reduce the information level.
The player starts with 20W of electricity. To generate more, generators have to be built. They generate electricity when a signal passes through their zone. For example, a 5W generator costs 5W to build and generates 5W of electricity.
Each zone can hold only one structure (e.g. a jammer or a generator). Building a new structure in a zone with an existing structure demolishes the existing one. Structures cannot be sold.
After wave 6, Accelerated E.M. Wave signals appear. They bypass zones 2, 4 and 6, traveling through zones 1, 3 and 5 to Cuba.
After wave 11, Narrow-band E.M. Wave signals appear, comprising low-frequency (LF) and high-frequency (HF) signals. Ordinary jammers are half as effective against these signals as against others. To counter them, the player can build LF Jammers and HF Jammers. A 10W LF Jammer attenuates LF signals by 20W, ordinary signals by 5W, and cannot attenuate HF signals. A 10W HF Jammer attenuates HF signals by 20W, ordinary signals by 5W, and cannot attenuate LF signals.
After wave 16, E.M. Waves also travel from Cuba to North Korea, propagating through zones 6, 5, 4, 3, 2 and 1. Accelerated E.M. Waves from Cuba propagate through zones 6, 4 and 2.
After wave 21, FM Signals appear. FM signals are immune to ordinary jammers, which makes non-LF, non-HF FM waves very troublesome to deal with, since ordinary jammers cannot attenuate them at all. HF and LF jammers are only half as effective against these waves.
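The jamming arithmetic described above can be sketched in a few lines. This is a reconstruction from the rules in this post, not the game's actual code; the effectiveness table and the example numbers are my own illustration of those rules.

```python
# Sketch of the jamming rules above: each jammer attenuates a passing
# signal according to its effectiveness against that signal type, and
# whatever power survives feeds the enemy's information level.

# (jammer type, signal type) -> attenuation per watt spent on the jammer.
EFFECT = {
    ("ordinary", "ordinary"): 1.0,
    ("ordinary", "lf"): 0.5,   # ordinary jammers are half as effective
    ("ordinary", "hf"): 0.5,
    ("ordinary", "fm"): 0.0,   # FM is immune to ordinary jammers
    ("lf", "lf"): 2.0,         # a 10W LF jammer attenuates LF by 20W
    ("lf", "ordinary"): 0.5,
    ("lf", "hf"): 0.0,
    ("lf", "fm"): 0.5,
    ("hf", "hf"): 2.0,
    ("hf", "ordinary"): 0.5,
    ("hf", "lf"): 0.0,
    ("hf", "fm"): 0.5,
}

def leaked_fraction(sent_watts, signal_type, jammers):
    """Run one signal past the zones' jammers and return the fraction
    of the sent power that reaches the enemy (the info-level gain)."""
    remaining = float(sent_watts)
    for jammer_type, jammer_watts in jammers:
        remaining -= jammer_watts * EFFECT.get((jammer_type, signal_type), 0.0)
        remaining = max(remaining, 0.0)
    return remaining / sent_watts

# A 20W ordinary signal against a 5W ordinary jammer and a 10W LF jammer:
print(leaked_fraction(20, "ordinary", [("ordinary", 5), ("lf", 10)]))  # 0.5
```

The example shows why FM waves are troublesome: against `"fm"`, the same two jammers would only remove 5W in total.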
Before the jam, I developed the hardware Dinbo Prototype B as well as its library, libdinbo. I also made a template for developing any telephone system based on this library.
In addition to the telephone system, I practiced with LMMS, LabChirp, Audacity and SoX — all of the software that I planned to use. I also planned to practice Aegisub for making video subtitles, but I ran out of time before the jam day came. :(
Just like last year, the 48-hour game jam spanned three days.
On the first day, I arrived at the venue before the event started and found a bug in the Dinbo Prototype B library that made it fail to detect DTMF touch keys properly. Fortunately, I fixed it right before the event began.
After that, I listened to the briefing. Once the theme, Waves, was announced, I came up with this game design quickly. Since I was working solo at this jam, the process went much quicker than at the previous year's jam. :)
Here is a photo of my game design document. As you can see below, it is only one A4 page. It is poorly organized because I'm the only intended audience of this document. :P
Some of the features were struck out at the design stage because I foresaw that they couldn't be completed within 48 hours.
Here is a photo taken on day 1 of the event. Sorry for the phone camera quality:
There were many jammers at our jam site — the Hong Kong SAR site was the 8th largest out of 700+ jam sites all over the world in 2017!
From last year's experience, I knew I wouldn't sleep on-site because I simply couldn't. At the end of the day, I went home and took a rest.
Although I slept much better than last year, I still didn't sleep enough, so I woke up late. I started programming. Feeling dizzy, I took a nap after an hour of development, then woke up and continued. To save travel time, I decided to jam at home that day: I was solo and didn't need to be on site to collaborate with teammates.
Thanks to libdinbo's software emulation feature, I was able to develop the game without dialing the phone number, which saved me some of the cost of calling it.
With the existing code base, development went smoothly and I managed to implement the gameplay. After midnight, I started translating the game into Mandarin and English. I also synthesized and recorded some sfx. Originally I had planned to make music and record the Cantonese voice myself, but considering that I needed to prepare for the presentation, I dropped those features and went to bed. All of the development work ended there. However, I still hadn't deployed it on the Dinbo Prototype B hardware.
I (almost) dedicated this day to preparing the presentation. I decided to prepare it before deploying to the hardware, because the deadline for submitting the presentation was very tight, and preparing it first would buy me a bit of extra time.
I started by recording the gameplay audio using libdinbo's voice log function in its software emulation mode. It went quite well and I recorded 9 minutes of audio, which I cut down to 4.5 minutes.
Then I asked the volunteers about the presentation time limit. It turned out each team would only be given two minutes to present their game. Well, I'd thought I had 5~10 minutes. :/
After that, I cut the audio further down to 1.5 minutes. Even the gameplay instructions were removed.
So how did I explain the gameplay? Simple. I used LibreOffice Impress to make slides visualizing the gameplay frame by frame. Then I used vokoscreen to record the slides and my voice: I clicked through the slides while playing the gameplay audio. After that, I used FFmpeg to trim the beginning and end of the video and convert it to WebM. Here is the demonstration video in Cantonese.
After that, I prepared another slide deck, which I planned to show before playing the demonstration video.
Everything went well, except that the deadline was very tight: I had to do everything described above within about 3~4 hours. Then I uploaded my video to the Global Game Jam website and sent the slides to the staff. After that, I deployed my program to the Dinbo Prototype B hardware.
Here is the presentation session (the guy presenting in the photo is not me):
It didn't go very well when my turn came. I had expected access to a presenter's mouse so that I could click through my slides and their animations myself. Unfortunately, the presenter's mouse somehow wasn't working. The staff member at my jam site then clicked his own mouse randomly, causing slides to show up earlier than they were supposed to. After that, he tried to click the gameplay demonstration video link in my last slide, but he had forgotten to enter presentation mode and couldn't click the link. It looked very bad to the audience. :/
Nevertheless, many fellow jammers found the gameplay video funny. I enjoyed their laughter and applause at the end, and I earned a certificate of participation. :)
After that, I introduced my final year project to my fellow jammers and interviewed some of them about it, which was helpful for improving the project. :)
Finally came the closing ceremony. As I had expected, I got no award because I was solo. Apparently the sponsors of our site are reluctant to give awards to solo teams. :P
Anyway, I completed the game within the 48 hours, proved that libdinbo works, and showed my final year project to others. It was a great success compared with the previous jam.
After the jam, I talked about the event with people from other jam sites over the internet. Someone who joined the Tokyo University of Technology site shared an interesting photo of it (used with the copyright holder's permission):
Apparently the jammers at the Tokyo University of Technology site had more fun than we did. Instead of giving awards to well-performing teams, they had a pizza party, and the reward was the games the jammers themselves developed. That's a better match for the spirit of a game jam.
I performed much better at this game jam than the last one, and this year was much more fun for me. Here is what went well:
Overall, this year's jam went pretty well, and it was quite a memorable experience. :)
This year's Game Jam was very special for me: it had some strategic value. As you may have noticed in my previous blogpost, Dinbo Prototype B will be the successor of the existing telephone system, Dinbo Prototype A. Dinbo Prototype B will be used for the following purposes:
This game jam helped improve the code base of Dinbo Prototype B; in particular, its internationalization functionality was enhanced during the jam. It was also a good way to test whether the entire system works: if a game can be developed on this system, it would definitely be possible to develop my final year project on the same system.
That said, my mission for Global Game Jam 2017 is accomplished. Now I've got to work on my final year project as well as my phonesite. :D
Want to read more? A fellow jammer in Japan wrote a blogpost about his game - Super Smash Tokyo
Hey guys! I've finally got time to blog about the technical details behind the whack-a-mole game.
Click here to view the previous part of this blog post, which is a release announcement of this game.
The game is powered by Dinbo Prototype A, a telephone system that I developed using a SIM900A module with a Raspberry Pi 3.
The schematic diagram of the system is shown below:
I know, this system is stupid. Instead of connecting the mic and speaker of the SIM900A to the Raspberry Pi, GPIO is used for voice communication between them. To make things even funnier, an ATtiny13 is used as the ADC.
Anyway, this is just an early prototype. I just wanted to tinker with the electronic parts that I had, and this design suited that purpose very well. More importantly, the system works. :P
Here is how the SIM900A and ATtiny13 look after everything is connected:
As shown above, the entire system is deployed on a breadboard.
Due to the high current requirement, multiple breadboard wires are needed for the SIM900A's power supply. A capacitor is also connected between VCC and GND to smooth out the voltage level (not shown in the outdated photo above).
Several programs were written for this system: the ATtiny13 ADC program, the serial multiplexer, the voice-to-socket program, and the whack-a-mole program itself. All of them are written in C.
The ATtiny13 ADC program, as its name suggests, runs on the microcontroller and converts the analog voice signal from the SIM900A into a time-based digital signal. When a "get sample" signal is received on PB1, it reads the analog input on PB3 and sends a digitized signal via PB4 to the Raspberry Pi. This is the first non-Arduino embedded program I have ever developed; I had some fun reading the ATtiny13 datasheet. The avr-libc library was used.
The voice-to-socket program converts the digitized audio signal received from the ATtiny and sends it to the whack-a-mole program through a Unix socket file. This is the first real-time program I've developed in my life. To achieve real-time execution, a CPU core is reserved solely for this program's Linux process.
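The core-pinning idea can be sketched with Linux's scheduler-affinity API. The real program was written in C (via the `sched_setaffinity` syscall); this is a Python sketch of the same idea, using the Linux-only `os.sched_setaffinity`. Note that truly *reserving* a core, as described above, also means keeping everything else off it (e.g. with the kernel's `isolcpus` option), which a single process cannot do by itself.

```python
# Sketch: pin the current process to one CPU core so the real-time
# audio loop isn't migrated between cores (Linux-only API).
# The actual program did the equivalent in C.

import os

def pin_to_core(core):
    """Restrict the current process to a single CPU core and
    return the resulting affinity set."""
    os.sched_setaffinity(0, {core})   # pid 0 = the current process
    return os.sched_getaffinity(0)

print(pin_to_core(0))  # {0} on Linux
```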
The serial multiplexer allows multiple programs to write to the SIM900A's serial interface. It redirects all data received from the socket to the serial interface. It is adapted from this program found on Stack Overflow.
The whack-a-mole program works by communicating with the serial multiplexer as well as the voice-to-socket program. It also detects DTMF tones by processing the received audio signal. Other than that, it is just like any other C program.
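The post doesn't show the DTMF detection code, so here is a hedged sketch of one common approach, the Goertzel algorithm, which measures the power of each DTMF frequency in a block of samples and picks the strongest row/column pair. The actual implementation was in C; this Python version only illustrates the technique.

```python
# Sketch: DTMF detection with the Goertzel algorithm (illustrative,
# not the game's actual C code). DTMF encodes each key as one "low"
# row frequency plus one "high" column frequency.

import math

def goertzel_power(samples, target_hz, sample_rate):
    """Power of target_hz in the sample block (rectangular window)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)      # nearest frequency bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_digit(samples, sample_rate=8000):
    """Pick the strongest row and column frequency and map to a key."""
    low = max([697, 770, 852, 941],
              key=lambda f: goertzel_power(samples, f, sample_rate))
    high = max([1209, 1336, 1477, 1633],
               key=lambda f: goertzel_power(samples, f, sample_rate))
    keypad = {(697, 1209): "1", (697, 1336): "2", (697, 1477): "3",
              (770, 1209): "4", (770, 1336): "5", (770, 1477): "6",
              (852, 1209): "7", (852, 1336): "8", (852, 1477): "9",
              (941, 1209): "*", (941, 1336): "0", (941, 1477): "#"}
    return keypad.get((low, high))

# Synthesize the DTMF tone for "5" (770 Hz + 1336 Hz) and detect it:
rate = 8000
tone = [math.sin(2 * math.pi * 770 * t / rate)
        + math.sin(2 * math.pi * 1336 * t / rate)
        for t in range(400)]  # a 50 ms block
print(detect_digit(tone, rate))  # '5'
```

A real detector would also threshold the powers (so silence or speech doesn't register as a key) and require the tone to persist for a minimum duration.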
The source code of these programs is poorly organized, and I still haven't had time to package it. There is also a copyright issue with the serial multiplexer, because the majority of its code is taken from a Stack Overflow answer with an unspecified license. Therefore, I cannot release these programs fully publicly. However, a copy of the source code can be requested by email; requests are considered on a case-by-case basis.
For my college final year project, development of Dinbo Prototype B has started; it will be the successor of the current system. A standard analog audio interface will be used (instead of the analog<->digital conversion hack using GPIOs). I also plan to solder it onto a perfboard.
The library will be written in Python, which is much more flexible than C. It will mainly be designed for non-game telephone systems, but it should still be possible to make a game with it. After the library is complete, I'll port this game to Dinbo Prototype B. I'll keep you guys updated on it.