A couple of months ago, while trying to make some decent progress on Nuubz, I came to a point where I realized I needed to start doing some serious work on an unrelated project before I could advance what I wanted to work on. In particular, I wanted to move the system/site security forward, which I hope will lead to better comment spam prevention and better all-around security. I already had a plan in place; I just needed to actually implement it.

Enter Project Indigo. Or at least *MY* Project Indigo.

Some years ago, while working at a web hosting company, I noticed that people kept trying to break into a site of mine that literally had no content. There was a single, simple HTML file saying “There’s nothing here yet.” So, I quickly wrote a script and database to record those attacks, and that system has been tracking them for 6 years. Last year, I noticed an abundance of brute-force SSH attacks on the server as well, and started recording those in a separate system. I decided to put this data together into a security website project to help the masses, and myself, but I just didn’t get around to doing it until Nuubz prodded me to.

So, I put an old domain name I owned to use and Project Indigo was born. I still have a lot to do on it, including actually providing some useful information beyond some statistics on the home page, but as you can see, it’s currently receiving live information from two virtual private servers. (I’m getting ready to shut one down, however.) There have been over 700,000 SSH attacks detected and reported to the system as of this moment, compared to only 2,700 “404” attacks. I emphasize “404” attacks because these are pure page-not-found attacks; on my honeypot site, these are requests for pages that don’t exist and never have, with no additional attack parameters. There’s another, similar category that I’m simply calling “web attacks” which isn’t reported yet: these are (again, on my honeypot sites) page requests with GET, POST, and/or cookie values that were never requested, used, or expected on the site, regardless of whether the requested page has ever existed. (Again, on the honeypot site, most of the pages that have been targeted have never existed.)
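To make the distinction concrete, here’s a minimal sketch of that classification logic in PHP. This is a hypothetical illustration, not the actual Project Indigo code; it assumes the script runs as the 404/catch-all handler on a honeypot site that never legitimately uses query strings, form posts, or cookies:

<?php
// Hypothetical honeypot classifier, for illustration only.
// On a site that never asks for input, any GET, POST, or cookie
// data marks the request as a "web attack"; a bare request for a
// nonexistent page is a plain "404" attack.
function classifyRequest(): string
{
    if (!empty($_GET) || !empty($_POST) || !empty($_COOKIE)) {
        return 'web';
    }
    return '404';
}

// The real system records each hit (IP, URI, type, timestamp) to a
// database for later reporting; a log line stands in for that here.
error_log(sprintf('[honeypot] %s %s type=%s',
    $_SERVER['REMOTE_ADDR'], $_SERVER['REQUEST_URI'], classifyRequest()));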

I’m still debating whether I should try to make a business out of this or not, but I’m willing to accept donations. I’ll provide that information when I make it possible to register an account on the site and put a little more polish into it. In the meantime, some attack data is available on Google if you search for “site:prjindigo.com”, and machine-readable data on a given IP address is available at https://www.prjindigo.com/data/<ip address>.json . Both IPv4 and IPv6 addresses are supported, though I’ve only seen a few v6 addresses enter the system at this point. (Be sure to URL encode IPv6 addresses.)
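For example, pulling that JSON from PHP might look like this (a sketch; the URL format is as described above, but the response fields aren’t documented here, so the code just decodes whatever comes back):

<?php
// Fetch the machine-readable data for a single IP address.
function lookupIp(string $ip): ?array
{
    // rawurlencode() handles the colons in IPv6 addresses.
    $url  = 'https://www.prjindigo.com/data/' . rawurlencode($ip) . '.json';
    $json = @file_get_contents($url);
    return $json === false ? null : json_decode($json, true);
}

var_dump(lookupIp('2001:db8::1')); // IPv6 documentation-range example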

I have created a GitHub repo for the honeypot software, which is still in active development as well, and I’m working on a Go program to report the data and possibly parse log files to get SSH failure data. (I’m still unsure about using Go to parse that data, as the log files may change from OS to OS.) Don’t rush out and clone either repo yet; both depend on client identifiers and encryption keys that require having an account at the Project Indigo website, which, as I indicated above, isn’t quite ready for that yet. But I’ll be sure to post here when the time has come.

It’s been a while since I wrote anything here, and probably longer since I actually made any real progress on Nuubz. As was the case in 2017, life has been kind of hectic thanks to my day job. (For example, I spent roughly 2 of the last 3.5 months of the year out of town for work.) Well, over the last 10 days or so, I’ve made some pretty good progress.

First and foremost, I got back to work on the OAuth2 system, and I’ve managed to actually receive usable account data from both Google and Facebook with it. I have received some limited user account data from Twitch, which may just be an issue of expectations, and I’m also reconsidering Twitter support. (Up until now, I’ve thought that the lack of email or real user information in their responses made supporting Twitter pointless, but as long as an account ID is supplied, I guess that’s enough to identify a particular Twitter account.) I have updated the publicly available open source versions of these OAuth classes over at GitHub.

What I’m working on at this precise moment is encryption support. Up until now, I’ve been using mcrypt to encrypt email addresses and other important tidbits, but as of PHP 7.2, that’s deprecated. Yes, you can fight to get it installed as a PECL extension, but if you’re using the IUS version of PHP, that is exceptionally difficult to do since they don’t include a PECL binary. (I’ve done it on this server, but I can’t remember precisely how I accomplished it to tell anyone else or repeat it for my own development server.) So, I’m switching over to Sodium, the [current] new wave of the future. In general, it takes fewer calls to accomplish good encryption; however, it’s also more tedious, because if you actually want to decrypt something you encrypted with it, you not only have to have the key, you also have to have the nonce that was used. Which means you need to store both somewhere. There’s a trade-off between security and practicality that has to be made as a result. A site I read while implementing this literally said “never use the same key and nonce twice,” but how can you encrypt important data on a website without using them twice or more? That I haven’t figured out yet.
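For what it’s worth, my reading since is that the warning is about the key-and-nonce pair, not the key itself: you can reuse the key indefinitely as long as every encryption gets a fresh random nonce, and since the nonce isn’t secret, the usual pattern is to store it right next to (or prepended to) the ciphertext. A minimal sketch with PHP’s Sodium functions:

<?php
// Encrypt with a fresh random nonce every time, and prepend the nonce
// to the ciphertext so decryption can recover it. The nonce is not
// secret; it just must never repeat for the same key.
function encryptField(string $plaintext, string $key): string
{
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    return $nonce . sodium_crypto_secretbox($plaintext, $nonce, $key);
}

function decryptField(string $blob, string $key): string
{
    $nonce  = substr($blob, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $cipher = substr($blob, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $plain  = sodium_crypto_secretbox_open($cipher, $nonce, $key);
    if ($plain === false) {
        throw new RuntimeException('decryption failed: wrong key or tampered data');
    }
    return $plain;
}

// The key, unlike the nonce, must stay secret (e.g. in a config file
// outside the web root).
$key  = sodium_crypto_secretbox_keygen();
$blob = encryptField('user@example.com', $key); // store $blob in the database
echo decryptField($blob, $key);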

Once I get the new encryption working, I can officially implement the OAuth2 registration and login paths; I am hesitant to store the OAuth2 provider’s access key, client ID, and client secret in the database unencrypted. While I could keep them on disk in the configuration file, I’m trying to minimize the important data that’s present there for fear of misconfigured web servers or clever exploits of the code I’m writing. I’m trying to be as mindful as possible of potential exploits as I write Nuubz, but there’s always something that you overlook as a programmer, and always potential and actual bugs in the software that yours depends on. Of course, once an attacker has access to the database, it’s all over. That could come from SQL injection (hopefully a path eliminated by proper use of PHP’s PDO database abstraction, which I’ve used for years), some sort of cross-site exploit that might elevate privileges, or a shell script of some sort that gives them access to the files on the server as the user running the server. (Usually “apache”, “httpd”, or “www”.) Nonetheless, security is on my mind as I code.
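On the PDO point, the injection protection comes from prepared statements with bound parameters, where user input is passed as data rather than spliced into the SQL string. A quick sketch (the table and column names are invented for illustration):

<?php
// Bound parameters keep user input from changing the shape of the query.
// Table and column names here are hypothetical.
$pdo = new PDO('pgsql:host=localhost;dbname=nuubz', 'dbuser', 'dbpass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$stmt = $pdo->prepare('SELECT id, display_name FROM users WHERE email = :email');
$stmt->execute([':email' => $_POST['email'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);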

Finally, for this update at least, I’m looking for some comic strips to post as a demo once I get that far. I really wanted to steal… ahem, borrow some strips from comics I read (Grrl Power Comic, Least I Could Do, LFG, TMI Comic, Megatokyo) for the demo, but I think it’s better if I get a creator/writer/artist to volunteer some of their work, or at least give me permission to go through their stuff. It doesn’t even have to be a real comic or part of a regular series; I just need something to showcase for Nuubz. Eventually. I’m hoping this will be the year that I have something usable to demonstrate, and not just a bunch of code tests. If you’d like to help out, drop me a line with the form below.

Beyond that, happy New Year!

I’ve been mostly quiet for the last year or so, the main reason being that I’ve been either waiting anxiously for news on an exciting job (which I got) or just plain working it. BLS, as much as I like the idea of it being something that keeps food on the table, really is nothing more than my hobby at best.

That said, I haven’t had much time to catch up on Android development, though I continue to be intrigued by it, and I’ve been slow on my web projects. There’s little excuse for it, but work has had a major impact on my activity levels at the end of the day and on weekends.

Not that I’m complaining about work…! I absolutely love my job!

That said, I’ve been trying to get back in the swing of things lately, and in particular have been working on the OAuth 2 implementation I started for Nuubz.

The first question in your mind may be “What is OAuth?” The simple explanation is that it’s an open standard for communicating with supporting services to let you register and log in to websites without needing to manually create an account there. You’ve probably seen Facebook and Google login options on many websites already; OAuth is what allows that magic to work.
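To make that a little more concrete, here’s the shape of the OAuth2 “authorization code” flow from RFC 6749 as a stripped-down PHP sketch. The endpoints shown are Google’s (they’ve moved over the years); the client ID, secret, and redirect URI are placeholders you get by registering your application with the provider:

<?php
// Step 1: send the user to the provider's consent page.
$clientId     = 'YOUR_CLIENT_ID';     // placeholder
$clientSecret = 'YOUR_CLIENT_SECRET'; // placeholder
$redirectUri  = 'https://example.com/oauth/callback';

$authUrl = 'https://accounts.google.com/o/oauth2/v2/auth?' . http_build_query([
    'client_id'     => $clientId,
    'redirect_uri'  => $redirectUri,
    'response_type' => 'code',
    'scope'         => 'openid email profile',
    'state'         => bin2hex(random_bytes(16)), // anti-CSRF token; verify it on return
]);
// header('Location: ' . $authUrl);

// Step 2, in the callback: exchange the one-time code for an access token.
$ch = curl_init('https://oauth2.googleapis.com/token');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => http_build_query([
        'code'          => $_GET['code'] ?? '',
        'client_id'     => $clientId,
        'client_secret' => $clientSecret,
        'redirect_uri'  => $redirectUri,
        'grant_type'    => 'authorization_code',
    ]),
]);
$token = json_decode(curl_exec($ch), true);
// $token['access_token'] then goes in an Authorization: Bearer header on a
// request to the provider's user-info endpoint to fetch the account details.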

So why, as I’m sure your next question would be, am I implementing this myself? While there are libraries to do it, the problem is that they’re either difficult to use or understand, under-documented, or have a license that would get in my way. I’ve tried to use a particular OAuth library I found on SourceForge several times over several years; while I got it to work somewhat, it confused the hell out of me in terms of actual usage, what data was safe to store, and how to resume login sessions. The reason, besides the complexity of the library, was that it wasn’t well documented. In fact, the example code they provided literally answered nothing, not even which elements of the code were required.

While I’ve downloaded but haven’t looked at other implementations, I’ve been very hesitant to even think about using them because of licensing. I think I’ve made it clear that I hate the GPL; I don’t want to make Nuubz open source just because I used a GPL’ed library in the software! If I decide to open source Nuubz, I want it to be because I chose to make it open source! Sure, there are probably a number of OAuth libraries that are open source with a compatible license like BSD or MIT and are documented with decent examples, but I really didn’t feel like trying to hunt them down and keep them updated.

So, I decided to write my own. While I’ve been stalled for much of the last year as I mentioned above, I’ve made some important progress this week. As of this moment, my code (available at GitHub) can initiate the handshake and retrieve account information from Google. As the code is very similar for many other OAuth providers (such as Patreon, Disqus, and even Twitch), only a few relatively minor changes are necessary to get it to work with them as well. Facebook support is coming too, though they have some additional hoops to jump through. I’m still debating Twitter… Last time I looked at implementing support for them, they didn’t provide any useful account information like email address or real name.

Now before you go off and download my code to use it, the latest changes [to make this battle station fully operational] aren’t on GitHub yet; I need to remove some debug code and clean things up a bit but I’ll have it there before this weekend ends.

There are times to love Google and there are times to hate Google. At the moment, I’m in the latter phase, though it may not be for the reason you’re thinking based on the subject of this post and Annette Hurst’s article, “The Death of ‘Free’ Software or How Google killed GPL.” To be clear, the reason I hate Google at the moment is that that headline popped up on my phone in Google Now, not that someone thinks the GPL has been undermined and destroyed. Larry Ellison seems to have had a longstanding grudge against Google and Android and has been hell-bent on destroying the latter, which is the underlying reason this legal warfare began. As annoying as these lawsuits have been, they’re not over yet, and the GPL and open source sure as hell aren’t over just because Google won this particular case for the moment. She may be a lawyer, but oddly, Ms. Hurst has a relatively weak grasp of what issues the case was about. I’m going to try to explain the case, why she is wrong before her article spreads virally across the internet, and why I nonetheless wish the GPL were indeed dead.

Let’s start with what the case is fundamentally about rather than focusing on why it came about.

Oracle is suing Google over their use of the Java API, which was developed by Sun Microsystems back in the 90s. The goal of Java was to allow developers to write software once and run it anywhere. This was made possible by compiling the written code into bytecode, which could then be interpreted by the runtime system installed on the local computer to execute as intended. The runtime system is written for the local computer’s operating system so that the Java application or applet can run at near-native speed. Ideally, at any rate. (A later adaptation called a Just-In-Time compiler, or JIT, came about to transcode the bytecode into the operating system’s native code so that it is indeed competitive with applications written in other compiled languages.) There are two executable “targets” for Java depending on the developer’s intended method of execution: applet and application. A Java applet is run from within a web browser, and is probably the most common way to encounter and use Java. Well, it was before Android came along, but that’s technically a different story that I’ll get to later. A Java application, as I implied earlier in this paragraph, is executed directly on the user’s computer via the runtime system. Fundamentally, Java is Java, regardless of whether it’s in an applet or application, though applets are generally speaking more restricted than applications for security reasons. (There are ways to circumvent many of those restrictions, but there’s no need to get into that.)

The problem with Java in general, which ultimately led to the sale of Sun Microsystems, was that they were pretty much giving it away for free. Sun was a great friend to open source developers and operating system developers, allowing anyone to use Java to develop applets and applications as they pleased, and even to develop alternative versions of the runtime system, provided that the bytecode generated by their compilers was compatible with Sun’s runtime and their runtimes were compatible with the bytecode generated by Sun’s compilers. This interoperability mandate was important because Sun couldn’t be expected to develop an official Java runtime for every operating system on the market, from big developers like Microsoft, Apple, IBM, and Red Hat to small ones, such as Be, OpenBSD, and all the independent small-timers. Not to mention, Sun had their own operating system, Solaris. In order for Java to truly run everywhere, Sun needed developers on other operating systems to develop their own runtime systems and compilers, and made reference implementations available through the magic of open source. The lawsuits have established that these reference implementations were under a dual license: the GNU General Public License (GPL) and a commercial license. I’ll explain the GPL later, and why I hate it, but suffice it to say it’s an open source license that has certain rules that need to be followed but essentially allows anyone to use the code for free. Commercial licenses, of course, are intended for commercial use, are usually sold by the license holder, and usually have additional benefits all around. Again, I’ll come back to these licenses later.

Another problem with Java was that it was plagued with security issues over the years, and it ran up against Macromedia’s Flash (Macromedia was later purchased by software powerhouse Adobe), which handled animation and media natively better than Java did at the time. As security issues appeared and Flash grew in popularity, Java’s use declined. In fact, by 2012, Firefox began disabling Java support by default, forcing users to enable it either temporarily or permanently by a conscious choice. Firefox was joined by Chrome and other browsers afterwards. While Flash had and has its own security issues, it was and still is wildly popular.

Google bought Android and began preparing it for use in smartphones years before Oracle bought Sun Microsystems. At some point, the Android team decided to use the Java API as the basis for developing applications for their new operating system. Although Android would be compiling Java source code to a bytecode, it would be compiling to Dalvik bytecode rather than Java’s. The goal of using the Java API in Android was to provide a means to develop for Android with a well-known, flexible, and relatively simple-to-use language. They could easily have chosen C or any of its more direct relatives like Objective-C or C++, assembly, or even invented an entirely new language. Given that Java was well established and Sun was losing money rapidly, Sun was more than happy to allow Google to use the Java API as the basis for developing apps on their new mobile operating system. Why? Because anyone that wanted to develop an Android app needed to learn Java if they didn’t already know it, and that would boost the use and spread of Java, which could potentially bring in indirect revenue to Sun.

You may be wondering exactly how that would be accomplished given that Java was free. (“Free as in beer” in this case.) Like Google, Sun made money on advertising deals; in this case, the installer for their official runtime, SDK, and JDK offered to install third-party toolbars and other applications alongside their own software. The more developers installing the Sun runtime and JDK, as was necessary to develop for Android, the more opportunities Sun had to make money.

Then, suddenly, Oracle bought Sun, and started an aggressive campaign to bring down Google and Android through whatever means necessary. The cases went back and forth, but ultimately led us to this ruling this week. Oracle, through Ms. Hurst’s firm, was claiming that Google had to use the commercial license for the Java API, not the GPL version, so they were in violation of the license and the law. The problem is that any individual or corporation can use GPL-licensed code freely, provided that (and here’s the kicker) they make any changes they implement freely available on demand. So, if Oracle/Sun had created a wooden cube, and Google used the cube under GPL to create a white cube with wiggly lines, Google would have to share with the world how they created the white cube with wiggly lines. This is the — and this is key — viral nature of the GPL. In principle and practice, anything that is based on GPL code is automatically GPL itself, and only the true owner/originator of the code has the right to issue the original code under a different license. Since the API was open source, GPL’ed as Oracle claims, the Android API had to be open source, and in particular GPL’ed as well.

And here’s where things get tricky. While I can’t say for sure that the Android API is indeed under the GPL license — a quick look at a source file shows the Apache 2.0 license in a file in the SDK — the Android API is indeed open source, and more importantly, that doesn’t really matter. The API is being used as a means to create bytecode that is separate and different from the Java bytecode, despite the fact that the language being used is close to, if not completely, identical. Android has been open source since before the first SDK became available in 2008, and the code it produces is only intended to be operable within Android devices via the Dalvik bytecode and runtime. Google has added a fair amount of code directly within the Android API, but anything that is critical to Google-specific business is downloadable separately from the Android API. So, the bottom line is that Google is using Java’s organization of classes, functions, and interfaces, and its description of such things in a purely textual sense, to describe how things get compiled into their bytecode and used as apps on Android. That doesn’t violate the GPL, and the commercial license that Oracle says Google had to use is irrelevant because the GPL itself indicates that it’s not necessary.

More pointedly, Ms. Hurst’s outrageous claim that this Google victory destroys the GPL is a flat-out falsehood. Open source is not going to die as a result of this case. The GPL is indeed legally stable; Oracle lost this case because their lawyers don’t seem to have a good grasp of the implications and use of open source licenses. And GPL software will continue to be developed; after all, many people use one of the biggest pieces of GPL software directly or indirectly at some point every day: the Linux operating system, which is at the heart of a great number of servers on the internet, providing web content, e-mail, sound and video, and even just plain text. If you’re an Android user, you use Linux every day, because Android was built on top of Linux. So in yet another way, Google is in compliance with the GPL. So what is Oracle complaining about? Simple: they’ve run out of ways to stab at Google in their efforts to bring down this giant.

Now, why do I hate the GPL? It’s that viral nature. As a programmer, I have a very real grasp of how much effort goes into doing even small tasks within computers, and I have a huge appreciation for being able to build on the work of others. In many cases, there’s no need to worry about the shoulders on which I’m standing to accomplish my goals. Many C/C++ libraries and SDKs are either considered standards, meaning they’re present in just about every compiler and/or SDK at no charge, or they’re under one of the even more lenient open source licenses such as the MIT or BSD license. Hell, some things are just plain public domain, meaning there’s no license at all and anyone is free to do anything they want with the code. But so often, a very useful or critical library you need is GPL. That isn’t so bad if what you’re doing is also going to be licensed under the GPL, but if you want to sell your product and not share your secrets, you have to back off of that GPL and find another library, if there is one, that will suit your needs. Maybe you’ll get lucky and find a commercial license for the same library and can afford it. Maybe another implementation is LGPL (the GNU Lesser General Public License), which allows you to link to the library without automatically making your code GPL. Whatever the case may be, that GPL can haunt you. God forbid you unwittingly use GPL code or a GPL library in your work; anyone that makes the discovery can immediately demand that you reveal all of your hard work, even if the one bit of GPL code is something completely innocuous and arbitrary. That hardly seems fair to me, and it’s the bane of the small, independent developer that can’t afford to buy a commercial license for every library they need to use.

So yes, I want the GPL to die in a fire, but at least I have a reason for wanting it to. Google didn’t break the GPL, despite what Oracle claims, and clearly Oracle’s lawyers need to come to a better understanding of open source licenses before they appeal this case.

So… you want to know how to upload a file via AJAX using jQuery and HTML5? Well, here’s how I’m doing it. This is not polished code, but something I cobbled together after visiting a number of sites and reading documentation in two books I have. I’m not going to provide the complete example or the PHP/server side of things, but this should be useful nonetheless.

Starting with HTML, I used the standard file input type with an onchange handler like so:

<input type="file" onchange="uploadIt(this.files);" name="uploadfile" id="uploadfile">
<div id="progressbox" >
<div id="progressbar"></div >
<div id="statustxt">0%</div>
</div>
<div id="output"</div>

The name of the field is completely arbitrary, and ultimately doesn’t matter because this field will not be submitted with your form, if you even have a form. If you do have a form, use an onsubmit() handler or your validation check to disable this field so the file isn’t uploaded a second time. By using the onchange handler, as soon as a file is chosen with the file dialog box, the upload process begins. No extra clicks are necessary. The progressbar and statustxt divisions (div) are used to indicate the upload progress, and the output division can be used for whatever output you want; while developing this code, I used it to report error messages and final status from the server.

The uploadIt function looks like this:

function uploadIt(files)
{
  var file = files[0];
  switch(file.type)
  {
    case 'image/png':
    case 'image/gif':
    case 'image/jpeg':
    case 'image/pjpeg':
    case 'image/webp':
      $('#statustxt').html('0%');
      var fd = new FormData();
      fd.append('uploadFile', file, file.name);
      $.ajax({
        url: '/upload-handler',
        type: 'POST',
        cache: false,
        contentType: false, // let FormData set the multipart boundary
        processData: false, // don't let jQuery serialize the FormData
        data: fd,
        success: function(data, textStatus, jqXHR)
        {
          if (typeof data.error == 'undefined')
          {
            $('#output').html('<b>success!:</b> ' + data);
          }
        },
        error: function(jqXHR, textStatus, errorThrown)
        {
          $('#output').html('<b>error:</b> ' + errorThrown);
        },
        xhr: function() {
          var myXhr = $.ajaxSettings.xhr();
          if (myXhr.upload) {
            myXhr.upload.addEventListener('progress', function(event) {
              var percentComplete = (event.loaded / event.total) * 100.0;
              $('#progressbox').show();
              $('#progressbar').width(percentComplete.toFixed(2) + '%'); // update the progress bar width
              $('#statustxt').html(percentComplete.toFixed(2) + '%');    // update the status text
              if (percentComplete > 50)
              {
                $('#statustxt').css('color', '#000'); // switch the status text to black once past 50%
              }
            }, false);
          }
          return myXhr;
        }
      });
      break;
  }
}

In my particular implementation, I only wanted certain file types (png, gif, jpeg, and webp, as determined by the browser) to be uploaded to the server. You could also check that the file is greater than 0 bytes and less than an arbitrary limit at this point, but I don’t have that implemented here. (The server-side sketch below repeats these checks, since browser-side checks alone can’t be trusted.)

Every time we have a file of the corresponding type selected, the statustxt content is set to “0%”, then we create the FormData object that will do the important job of properly formatting the file for upload. In the fd.append() call, the first parameter is the field name as it will be received on the server; this is the reason why it’s not important to have the field name set in the input tag itself. Naturally, you could copy that or use the same one, but you could just as easily set it here or make this a variable set by jQuery through another AJAX call. In PHP, this is the field name that will appear in your $_FILES variable, so you will need to know what to look for.
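Since $_FILES came up, here’s a bare-bones sketch of what the receiving side could look like in PHP. This is illustration only, not my actual handler; the /upload-handler URL matches the jQuery call above, while the destination directory and the 50 MB cap are invented:

<?php
// Hypothetical /upload-handler: a minimal receiver for the upload above.
// 'uploadFile' matches the first argument to fd.append() in the JavaScript.
$f = $_FILES['uploadFile'] ?? null;

if ($f === null || $f['error'] !== UPLOAD_ERR_OK) {
    http_response_code(400);
    exit('upload failed');
}

// Re-check type and size on the server; the browser-side checks are
// advisory only and trivially bypassed.
$allowed = ['image/png', 'image/gif', 'image/jpeg', 'image/webp'];
$mime    = mime_content_type($f['tmp_name']);
if ($f['size'] === 0 || $f['size'] > 50 * 1024 * 1024 || !in_array($mime, $allowed, true)) {
    http_response_code(400);
    exit('bad file');
}

// Move the file out of the temp directory under a generated name.
move_uploaded_file($f['tmp_name'], '/var/uploads/' . bin2hex(random_bytes(8)));
echo 'OK';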

Next we have the file variable. While the function receives an argument called “files“, and you can handle multiple files through this functionality, I wanted to maintain clarity here, so I only took the first file in the files array to process and upload. It would be trivial to loop through the contents of files to make this work for multiple uploads.

The last field in fd.append() is the file name as it existed on disk at the time the file was chosen. Using this field triggers FormData to use the Content-Disposition header which, from my experience, makes the upload possible.

Next, we use jQuery to initiate the upload via the ajax function. This is where things get ugly fast, but the bottom line is you can pretty much copy and paste what I have here. But for clarity’s sake, here’s what’s what:

url is the URL to which the file will be uploaded.

type is going to be POST, though you could theoretically use PUT; there’s just no reason to.

cache is just a precaution; let’s make sure the browser doesn’t cache anything.

contentType is going to get set by FormData a little bit lower, so don’t manually set it. Do not confuse this with the file’s actual type or MIME type! This will get set to something akin to “multipart/form-data”.

processData we’re not going to process the data locally, so move on.

data is the FormData object, fd, that contains our file.

Next up we have 3 anonymous functions that override the success, error, and xhr handlers in the jQuery ajax call. If you don’t care about displaying progress information as the file gets uploaded, you can omit the xhr handler, though you should keep success and error so your application will know if the upload succeeded or not.

The success and error handlers here are fairly simple; they really just give you an idea of what’s happened. The success function is called if there were no errors uploading the file on the client/browser side. I repeat, the success function is called if there were no errors uploading the file on the client/browser side of things. This means that as far as the browser knows, everything went smoothly, but it’s still entirely possible that there was some error on the server, such as the server being out of space, a virus scanner deciding the file was malware, or even just a configuration problem. Personally, I beat my head against a server-side problem for a couple of hours when I couldn’t figure out why files larger than 2 MB were failing even though I had PHP configured to accept files as large as 50 MB. (Spoiler: there was a typo in a different setting in my php.ini file, which caused PHP’s ini parser to stop and use default values.)

The error function is called if there was a JavaScript or browser error. As stated above, this does not handle errors on the server.

The xhr handler, as presented here, is responsible for updating the progress divisions, and thus the user. xhr is shorthand for XMLHttpRequest, and we need to send jQuery a lightly prepared version of the object prior to letting it do its thing, so we’re overriding its internal instantiator for the XMLHttpRequest object. First things first, we get the copy of XMLHttpRequest this particular call is going to use through the $.ajaxSettings.xhr() call. After making sure it has the upload member, we add an event listener with another anonymous function to handle the “progress” events. With the event argument, we calculate the completion percentage percentComplete, and use that to update the status indicators.

Assuming nothing has fundamentally changed by the time you read this, your script should be able to upload a file via AJAX using jQuery and HTML5 without submitting the whole form. (Assuming you bother with one at all.)

This is a brief summary of the current development status of Nuubz.

  • All development is being done using Apache, PHP 7, PostgreSQL 9.5.x, jQuery, and Bootstrap on Linux. The software, once feature complete, will be adapted to include MySQL/MariaDB support.
  • Native account creation and login is fully functional.
  • OAuth2 support is approximately 60% implemented from scratch based on RFC 6749; obtaining access tokens from Google and Facebook has been tested and works properly, though more work is necessary to obtain user information from both services.
  • Support for separate read/write and read-only databases is implemented; this will allow for a master-slave/replica configuration if the site administrator so desires. (A sketch of the idea follows this list.) This will not, however, transfer files such as the comic images from server to server; a network file system is recommended for that.
  • Support for Google Analytics, reCAPTCHA, Akismet, and Project Honeypot is built in.
  • HTML5 is the targeted HTML level.
  • Multiple language support for both interface and comic.
  • Microdata support is being implemented in the base theme.
  • Multi-home support is being implemented as well, to allow a single installation of the software to support multiple comics with different domain names. This will only require that the additional domains be parked on the server and point to the same directory.
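Since the read/write split above may sound abstract, here’s a minimal sketch of the idea (the class name, DSNs, and routing rule are all invented for illustration; the real Nuubz code is more involved):

<?php
// Hypothetical read/write split: SELECTs go to the read-only replica,
// everything else goes to the read/write master.
class SplitDb
{
    private $rw;
    private $ro;

    public function __construct(string $rwDsn, string $roDsn, string $user, string $pass)
    {
        $this->rw = new PDO($rwDsn, $user, $pass);
        $this->ro = new PDO($roDsn, $user, $pass);
    }

    public function run(string $sql, array $params = []): PDOStatement
    {
        $pdo  = stripos(ltrim($sql), 'SELECT') === 0 ? $this->ro : $this->rw;
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    }
}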

I’m more than likely forgetting some features that have been, are being, or will be implemented, but this makes a good first status report.

If you wish to play around with something, feel free to visit the comment spam tester.

Over the last few months, I’ve been [albeit slowly] working on a new piece of software. Instead of something for Android or a particular operating system, this one is for the web. While browsing my favorite webcomics, I came to realize that many were using WordPress, which seemed like a problem to me. While WordPress is a hugely popular and flexible piece of software, it’s really overkill for webcomics. So, I started developing my own dedicated piece of software, which I’m calling Nuubz.

It’s still relatively early in development, but if you want to watch the progress, you can visit http://dev.nuubz.com to take a look. Be sure to check back regularly as I make progress.

It’s been a few years since I last posted here, and I’ve even pulled my apps from the Play Store. The reasons are deceptively simple: I had a job that made it difficult for me to want to spend time at or near a computer before or after business hours. So my apps and my development skills stagnated; so much so that I didn’t fully fix the problems in Sylence or really update it for use with KitKat or Lollipop. In fact, once Lollipop was released, it supplanted some aspects of Sylence. Once Google decided developer addresses had to be a part of their profiles in the Play Store (whether that was public or not), I just went ahead and pulled them.

But things have changed.

I’m no longer with that employer (in fact I’m not with any employer at this time) and I’m working on sharpening up my skills again. This means resurrecting a project I started 20 years ago and have rewritten countless times without releasing it. This also means revamping Sylence for the more modern era. I’m also trying to learn Android Studio since that’s the way of the future in terms of Android development.

So, stay tuned…!

Oh, and for the record, the project that I had promised as “coming soon” a few years back did actually arrive and get pulled almost right away. It was a live wallpaper with a kind of neat idea, but the rug got pulled from underneath it. I have an idea of how to revamp it, but it’s on the back burner for now.


Thanks to newly found free time, the latest Android SDK updates, and the desire to finally finish this app, a little info-utility that I started working on about a year ago is now nearly complete. It’s almost certainly not going to make me any money, but I wanted to do it nonetheless since it’s useful to me. When I have more of it done, possibly on the day I decide to release it, I’ll actually let you know what it is… 😉

THEN I’ll be spending a lot more time on Themis…

I’ve spent most of the last weekend working on Sylence, and I’ve made a significant amount of progress. After doing some reading, I see no reason to use the Android 3.0/4.0 fragments tech in Sylence, and while I’m debating the idea of developing an application widget, I have definitely made some positive changes.

First and foremost, the biggest change so far is that in the dialog to create a new silence alarm, the date and time pickers aren’t initially visible. You will see the current date and time, and a date and time 45 minutes later. Tap the date, and you will be presented with the date picker. Similarly, when you tap the time, you will be presented with the time picker. I think this looks and works better than the old system, though it may be a little less obvious.

Also, the horizontal scrolling is gone in that dialog; the day of the week options for recurring alarms are now stacked vertically when recurring is selected, so when editing an alarm, you can immediately see which days are selected and which are not.

The next major change, although completely invisible, is the way that Sylence does the important work of checking to see if the phone should be silenced or not. Until now, a service has been running full time, sleeping for roughly one minute, then waking up and doing a check before going back to sleep. As of the new version, the app will use Android’s AlarmManager to do this, which may save more power than the old way. (Note that the old way didn’t use very much power, but this way should use even less.) The caveat, however, is that in order to do its work properly and on time, the app needs to obtain a partial wake lock to wake the processor up long enough for the alarm to be handled; otherwise, the alarms won’t be triggered until the device is woken up by another program or by user actions. In order to obtain a partial wake lock, I had to add another new permission, WAKE_LOCK, to the list of permissions used by Sylence. Without it, I’d either have to go back to the old method, or Sylence would only operate on its schedule while the phone was in use.

Another downside is that I’m adding more advertising to Sylence. I’m considering adding a “Pro” version back into the Android Market without the advertising, but I haven’t decided at this time.

I still haven’t gotten much in the way of feedback on Sylence at all, so I’m just doing things as I see fit. No ratings in the Market, no comments, no bug reports, no feedback. If you don’t like how Sylence is progressing, let me know. It’s the only way I can improve it, and right now is the best time to do that. There are still some changes I want to make before I release this new version, but if you want something in it, now’s the time to let me know!

Update 12/29/2011 2:59 am: Sylence 1.5 has been uploaded to the Android Market with new screenshots.