Category Archives: Computers & Internet

I uninstalled the Twitter app

Twitter is a sinking ship.

Honestly I’m sick of it. The toxicity. The lies. The biases. The censorship. What started as a fun platform has turned into a daily, waking-hours nightmare.

I remember how I started out on Twitter, back in 2009. One of my friends from MySpace, an aspiring model, and I carried on sharing our thoughts on the new site. At the time, she wasn’t too sure of her looks, and I assured her that she had what it takes to make a good career. And she did make it big time. But she has since been suspended on Twitter, maybe for showing too much flesh. I won’t mention her name for obvious reasons.

Twitter has not been a very positive experience for me in 2020. The role it has played in silencing valid dissenting medical opinion on the COVID-19 response is what I found most repulsive. I am particularly offended by their censorship of tweets about valid research that do not fit a certain narrative.

The deliberate suppression of tweets containing damning information about one of the U.S. presidential candidates is also unforgivable.

Frankly, I’m done. I’ve decided to pull back, first by removing the Twitter mobile app. I will remain active on the platform but on a more impersonal note. I don’t think the site can survive too long anyway. There is no trust anymore and even the beneficiaries of its antics know this.

I remember how we used to complain about porn and terror on Twitter. Nothing much was ever done about it – basically it boiled down to free speech and we just decided to live with it. “Face your tribe and ignore the stuff you don’t like” was the approach we followed. Nowadays, the woke brigade at Twitter will flag a tweet that says “only females can have cervical cancer”. Balderdash!

For me it’s time to scale down. Thank God I don’t have a million followers, so it’s going to be easy to disappear altogether, soon.

2 Comments

Filed under Computers & Internet

Why you need your own space on the Web

The other day I came across a tweet on Net Neutrality by Patricia Aas, a well-known Norwegian C++ programmer, and we exchanged a few mentions on that very topic. She was making a case for the use of web browsers over apps, citing serial abuse by app owners.

These days, matters seem to have come to a head on the social media platforms. The era of innocent social media fun and banter is over. The violation of this innocence probably began with the Arab Spring, when Twitter rose to prominence as a powerful tool for political activism. During the rise of ISIS (also known as ISIL), merchants of death began to make audacious appearances on the same site. Then came the 2016 US elections and the fallout from the Cambridge Analytica affair, which resulted in much hue and cry about election interference.

Social media is toast. All the same, we still flock to the watering holes, most of us oblivious to the crocodiles that lie in wait (or perhaps many are just too thirsty to even care).

Nowadays the social media giants wield a lot of power, power that they derive from other people’s data. I am not saying that they are doing anything wrong. I am convinced that the onus lies on the users not to do themselves a disservice by totally trusting and depending on these companies. It seems we are in a time when people simply refuse to accept responsibility for their own lives.

And this is why many have lost out. How many, like me, lost all their data after MySpace was sold? What about all the time invested in Google Plus? How many remember how Facebook vowed not to mash up WhatsApp data with that of the parent company, only to break that promise a few years later? Some people get banned or suspended for their harmless political views, simply because some “fact checker” at a company disagrees with them. Does something have to command consensus before being accepted as a fact? And how bland is social interaction without dissent?

Am I advocating the avoidance of social media sites? Absolutely not. That they have done us all a good service is not in doubt. After all, this post is written on WordPress.com, a social blogging site. What I am saying is that we, the consumers, need to start playing smart with our use of these services. There are 2 simple things I have started to do that could help:

  1. Have a strategy for regularly backing up ALL your data from social media sites.
  2. Develop your own space. Start off by buying a domain. Host your own website (it doesn’t have to be anything fancy) and work to mirror your social media content there.

What do you think?

Photo by Nikita Kachanovsky on Unsplash

2 Comments

Filed under Computers & Internet

Another Excel Horror Story

I was trying to create a list of officially approved Health Maintenance Organisations (HMOs) in Nigeria. After jotting down what data I wanted to collect and creating a schema, I paused to decide how to go about it. I wanted the end product to be a CSV file, and figured that the cheapest way to start would be to be “graphical” about it. I opted for MS Excel, since I could easily save the results in the desired format. After all, I’m an Office 365 subscriber, so why not give it a try?

If you know anything about me, you are probably aware of my aversion to Excel. After a long romance, our separation was both violent and traumatic. But today I told myself that I would not be unduly nasty and would give it a shot. There is no doubt that Excel is a great application, and it’s used by millions to great effect.

I found the website of the National Health Insurance Scheme (NHIS) and the page that lists the HMOs. Good. I could have two windows open, the web page on the left and Excel on the right, plug into some good music and in a few minutes of copy-pasting, I should be able to acquire the data.

After a few minutes, when I got to the phone numbers, Excel started up one of our old quarrels. Somehow, we could never agree on how to handle phone numbers. First, it turned the numbers into scientific notation. Then I tried to change the cell format from “General” to “Text” to allow leading zeros. Then I had to click on the action prompt to indicate that I didn’t want formatted text. Even though I applied my settings to the columns that were to accept phone numbers, whenever I hit the next row, I had to start all over again. Arrrrrgh!

I now chastised myself for thinking that Excel was a changed person. How stupid I was! So I had to vent…

Sometimes we do silly things but don’t know why. This was one of them. I’m reasonably comfortable with R, and practically kicked myself knowing that with the rvest package, and a little peeping around for HTML tags and/or CSS selectors using the SelectorGadget, I could more efficiently grab the data I so badly needed.

Here’s the code I eventually used to get the job done:

library(rvest)

# Read the NHIS page that lists the HMOs
nhisHtml <- read_html("https://www.nhis.gov.ng/hmo-contacts/")

# Pick out the <table> element and parse it into a data frame
tableTag <- html_nodes(nhisHtml, "table")
tblElements <- html_table(tableTag)

# The first parsed table holds the HMO list; save it as CSV
myDf <- tblElements[[1]]
write.csv(myDf, "data.csv")

What on earth was I thinking to even attempt using Excel for this task?

Leave a comment

Filed under Computers & Internet

Help with installing RQDA

The RQDA User Interface

[Update – 25 Nov 2020]: In the last 3-4 days, there has been significant activity on the RQDA GitHub repository, specifically addressing the needed updates to the package. So, it’s expected that very soon, the package will once again be available for installation via the regular channels.

RQDA is software for computer-aided qualitative data analysis (CAQDAS) and is specifically tailored for use with the R programming language and statistical computing environment. Last year I was privileged to use RQDA in carrying out the data analysis for an assessment involving 4 Nigerian States. It’s a great package, and very user-friendly. I was able to engage a team of non-programmers and after a 2-hour training, they were good to go, giving me great results.

A few months ago, somebody raised an alarm on the package’s GitHub repository. RQDA was gone!

GitHub Issue #38: Package was archived on CRAN
You need to see the comments that followed!

What followed was a long discussion – many researchers were adversely affected by this development. Fortunately, my project was properly isolated using package management powered by renv, and I really had no problems at all. But others were not so fortunate, and some didn’t even know how to start solving the problem. I participated somewhat in the thread to see how I could help out a few people.
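
In case renv is new to you, the idea is that each project gets its own private package library plus a lockfile recording exact package versions, so upheavals on CRAN don’t disturb a project that has already been snapshotted. A minimal sketch of the usual workflow (illustrative, not my exact setup):

install.packages("renv")
renv::init()      # set up a project-local library and renv.lock
# ...install packages and carry out the analysis as usual...
renv::snapshot()  # record the exact package versions in renv.lock
renv::restore()   # later, or on another machine, reinstall those exact versions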

You see, what had happened was that some of RQDA’s dependencies on CRAN, the Comprehensive R Archive Network, had been upgraded, and the maintainer of RQDA, Prof. Ronggui Huang of Fudan University, China, was yet to update the project accordingly. With the upgrade of R to version 4.0, these packages were all archived on CRAN and could not be installed the regular way, i.e. with install.packages(). On a good day, installing RQDA already presents some challenges because of the graphical user interface (GUI) libraries it uses. Now it was impossible, except for advanced R users.
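
For context, the manual workaround for an archived package boils down to pointing install.packages() at its source tarball in the CRAN archive rather than at the main repository. A rough sketch of that idea only (this is not the script discussed below, the version number in the URL is illustrative, and archived dependencies such as RGtk2 would still have to be dealt with first):

# Illustrative only: check the Archive/RQDA/ directory on CRAN for the actual file name
archiveUrl <- "https://cran.r-project.org/src/contrib/Archive/RQDA/RQDA_0.3-1.tar.gz"
install.packages(archiveUrl, repos = NULL, type = "source")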

One of the developers on the thread took it upon himself to work on a fork of the project and came up with a good solution. And it worked: RQDA could be downloaded and installed with little or no pain. However, when colleagues asked whether he was going to commit to maintaining the fork or even push it to CRAN, he declined, and rightly so. Instructions for using his branch can be found here.

Given this scenario, I decided that it would be good to also develop a solution based on the last available CRAN version, even though it was archived. I therefore came up with an R script that can be used both in the shell and within an R session. With this solution, RQDA can be successfully installed from CRAN on the current version of R (v4.0.2). I tried to provide informative messages to guide would-be users in carrying out the required steps; in some cases, there might be a need to stop the script and carry out an intermediary step at the R console. This script has been uploaded here as a GitHub Gist.

To use this script, follow these steps:

  1. Download the script and save it to disk (its name is gwdg-arch.R). Note the location where it is saved.
  2. Navigate to the directory/folder where the file is saved, either in the shell or in an R session.
  3. Run the script:
    • If in the shell, use Rscript gwdg-arch.R.
    • If in the R console, use source("gwdg-arch.R")
  4. If RGtk2 is successfully installed, the script will terminate. You should now go to the R console and run library(RGtk2); this will bring up a dialog asking you to install Gtk+. Accept it.
  5. After installing Gtk+, run the script again to download and install the other packages, including RQDA.
  6. If the above steps fail, perhaps your system is lacking some external dependency. Run the script in the shell, only this time add the flag --verbose. This will print out more messages to help identify the possible cause of the problem.

Feel free to give me a shout.

Leave a comment

Filed under Computers & Internet

Using Your Browser From the Command Line

Howdy!

I know it’s been a while since I posted – being selfish with all the new things I’ve been learning. I’m sorry. Today I was reminded in strong terms that sharing and giving are crucial, and without all the good stuff other people are posting on the internet, I wouldn’t know most of what I know today.

I want to talk about starting your browser from the command line; in this case I’m using Firefox on Windows, with PowerShell as the terminal.

For a long time now, I’ve been in the habit of starting my browser like this:

start firefox

I can open my favourite social media site from the shell like this:

start firefox twitter.com

Note that I didn’t even have to prepend the URL with http(s):// or www! Neat, eh?

Sometimes, when I’m really being lazy and I quickly want to jump to Google and conduct a search on “firefox command line options” from there, I just type

start firefox www.google.com/search?q=firefox+command+line+options

I know that this example is rather contrived, but if you understand the basics of HTTP/HTTPS and query strings, this should be easy to grasp.
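
To unpack that a little: everything after the ? is the query string; q is the parameter the search page reads, and the + signs stand in for spaces. Additional parameters get chained together with &, although the & has special meaning to PowerShell, so at that point you need to quote the whole URL. Something like this, where the hl language parameter is purely for illustration:

start firefox "www.google.com/search?q=firefox+command+line+options&hl=en"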

Having done this for a while, today I decided to look at the Mozilla Developer Network (MDN) reference to see what Firefox had to offer by way of command line options.

And BOOM I hit a mother lode! So far I have only skimmed over it, but I’m astounded at the possibilities I see – this should really make for good browser automation. I wonder why I never thought of it before now.
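
To give a taste, here are two of the options from that page, passed through the same start pattern as above. The quoting is needed so that PowerShell hands the flags to Firefox instead of trying to interpret them itself (treat these as a sketch and check MDN for the details):

start firefox "-private-window twitter.com"
start firefox '-search "firefox command line options"'

The first opens the site in a private browsing window; the second runs the search with the default search engine, no URL needed.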

If I find anything really useful I promise to share (this time). If you’re interested, have a look at MDN’s Firefox Command Line Options page.

I’m out!

1 Comment

Filed under Computers & Internet