Category Archives: Computers & Internet

RQDAassist v.0.3.1

This is to announce a new version of RQDAassist, an R package whose goal is to make working with RQDA much easier.

This version principally adds new functionality for retrieving codings from a project database. The new function, retrieve_codingtable(), takes as arguments the file path to an RQDA project and a string containing a valid SQL query (SQLite flavour). By default, one does not need to specify the query; the function builds one internally to fetch data from the relevant tables in the .rqda file. Thus, for a project MyProject.rqda, one can simply call

retrieve_codingtable("path/to/MyProject.rqda")

The default query that is run internally by this function is as follows:

SELECT treecode.cid AS cid, codecat.name AS codecat
FROM treecode, codecat
WHERE treecode.catid=codecat.catid AND codecat.status=1;
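
Under the hood, this amounts to opening the .rqda file as an SQLite database and running the query against it. As a rough sketch (illustrative only, since retrieve_codingtable() manages the connection and post-processing for you), the default query could be run directly with DBI and RSQLite:

library(DBI)
library(RSQLite)

# Open the RQDA project file, which is an SQLite database
con <- dbConnect(SQLite(), "path/to/MyProject.rqda")

# Run the default query shown above
codecats <- dbGetQuery(con, "
  SELECT treecode.cid AS cid, codecat.name AS codecat
  FROM treecode, codecat
  WHERE treecode.catid=codecat.catid AND codecat.status=1;")

dbDisconnect(con)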

The user is at liberty to form their own queries; a reference for the database tables ships with the RQDA package, and the documentation for this function (accessed with ?retrieve_codingtable) provides a quick link to that help page. For example, if we want to collect just the filenames of the transcripts used in an analysis, we can use a different query. Note that the data are returned invisibly, to prevent cluttering of the console, so it is better to assign the result to a variable.

qry <- "SELECT DISTINCT name FROM source WHERE status=1;"
tbl <- retrieve_codingtable("path/to/MyProject.rqda", qry)
tbl

We can easily try this out using material from the excellent RQDA workshop conducted by Lindsey Varner and team; the sample project they used can be downloaded right inside R:

url <- "http://comm.eval.org/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=101e221b-297e-4468-bfc9-8deccb4adf8c&forceDialog=0"
project <- "MyProject.rqda"
download.file(url, project, mode = 'wb')
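
If we check the working directory with list.files, we should see the project there:

> list.files(pattern = "\\.rqda$")
[1] "MyProject.rqda"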

Next, using our package's function, we can get a data frame with information on codings.

> df <- retrieve_codingtable(project)
> str(df)
Classes ‘codingTable’ and 'data.frame':	39 obs. of  9 variables:
 $ rowid       : int  1 2 3 4 6 9 10 12 13 14 ...
 $ cid         : int  1 1 2 1 2 4 4 1 4 3 ...
 $ fid         : int  2 2 2 2 2 2 3 4 4 4 ...
 $ codename    : chr  "Improved Time Management" "Improved Time Management" "Improved Organization" "Improved Time Management" ...
 $ filename    : chr  "AEA2012 - Post-Program Interview1" "AEA2012 - Post-Program Interview1" "AEA2012 - Post-Program Interview1" "AEA2012 - Post-Program Interview1" ...
 $ index1      : num  1314 1398 1688 1087 2920 ...
 $ index2      : num  1397 1687 1765 1175 2964 ...
 $ CodingLength: num  83 289 77 88 44 296 120 150 210 116 ...
 $ codecat     : chr  "Positive Outcomes" "Positive Outcomes" "Positive Outcomes" "Positive Outcomes" ...

We see that we have created a data frame with 9 columns holding some interesting data. Note particularly the variables codename, filename, and codecat. Let us now run the other query we gave as an example, to get the filenames of all the transcripts in the project:

> qry <- "SELECT DISTINCT name FROM source WHERE status=1;"
> tbl <- retrieve_codingtable(project, qry)
> tbl
                                name
1  AEA2012 - Post-Program Interview1
2  AEA2012 - Post-Program Interview2
3  AEA2012 - Post-Program Focus Group
4  AEA2012 - Pre-Program Focus Group

This project contains only 4 active files from which all the codings are derived!

A practical point

This function is useful for developing qualitative codebooks, particularly when coding is carried out inductively. As demonstrated above, it can also be extended to other uses, depending on the kind of data that are retrieved.
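
As a quick illustration of the codebook use case, the coding table retrieved earlier can be collapsed into unique category/code pairs with base R (a mere sketch; df is the data frame returned by retrieve_codingtable() above):

# Unique category/code pairs form the skeleton of a codebook
codebook <- unique(df[, c("codecat", "codename")])
codebook <- codebook[order(codebook$codecat, codebook$codename), ]
codebook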

Installation

The easiest way to install the package is from an R session with

# install.packages("remotes")
remotes::install_github("BroVic/RQDAassist")

This is a source package, and to build it on Windows, Rtools needs to be properly installed.
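
If in doubt as to whether Rtools is properly set up, the pkgbuild package (available on CRAN) offers a quick check:

# install.packages("pkgbuild")
pkgbuild::check_build_tools(debug = TRUE)  # should locate Rtools and report success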


Filed under Computers & Internet, Data Science

An R package to help with RQDA

A few weeks ago, I published a package on GitHub, which I called RQDAassist. It was inspired by a script I wrote to help RQDA users, myself included, install RQDA after it was archived on CRAN when R 4.0 arrived on the scene. So, when RQDAassist was first published, that was its only real functionality.

Today, I am releasing a minor update (v. 0.2.0) that adds a few functions. It can now convert transcripts written in Word into plain text files – a desired format for RQDA projects – and it can prepare those text files as objects that can be read, in bulk, into an RQDA database. Another thing I personally needed for my work was the ability to search qualitative codes using R scripts rather than the graphical user interface; so I wrote a search function, which currently works for active RQDA projects.

This package has so far been tested only on Windows 10 (x64), but it should work fine on other major platforms (any subsequent update will include the relevant tests for Linux and macOS).

There are no plans to take this package to CRAN, and indeed there should be no need to do so once RQDA installation from that repository is fully restored. But I find these additional helper functions quite useful in my work and hope others do too. I hope to see their functionality expand over time.

You are welcome to check out this project at the GitHub repository or try it out using the instructions in the README.


Filed under Computers & Internet

I uninstalled the Twitter app

Twitter is a sinking ship.

Honestly I’m sick of it. The toxicity. The lies. The biases. The censorship. What started as a fun platform has turned into a daily, waking-hours nightmare.

I remember how I started out on Twitter, back in 2009. At the time, one of my friends from MySpace, an aspiring model, and I continued sharing our thoughts on the new site. She wasn't too sure of her looks then, and I assured her that she had what it takes to make a good career. And she did make it big time. But she has since been suspended on Twitter – maybe for showing too much flesh. I won't mention her name for obvious reasons.

Twitter has not been a very positive experience for me in 2020. The role it has played in silencing valid dissenting medical opinion on the COVID-19 response is what I found most repulsive. I am particularly offended by their censorship of tweets about valid research that do not fit a certain narrative.

The deliberate suppression of tweets on damning information on one of the U.S. presidential candidates is also unforgivable.

Frankly, I’m done. I’ve decided to pull back, first by removing the Twitter mobile app. I will remain active on the platform but on a more impersonal note. I don’t think the site can survive too long anyway. There is no trust anymore and even the beneficiaries of its antics know this.

I remember how we used to complain about porn and terror on Twitter. Nothing much was ever done about it – basically it boiled down to free speech and we just decided to live with it. “Face your tribe and ignore the stuff you don’t like” was the approach we followed. Nowadays, the woke brigade at Twitter will flag a tweet that says “only females can have cervical cancer”. Balderdash!

For me it’s time to scale down. Thank God I don’t have a million followers, so it’s going to be easy to disappear altogether, soon.


Filed under Computers & Internet

Why you need your own space on the Web

The other day I encountered a tweet on Net Neutrality by Patricia Aas, a well-known Norwegian C++ programmer, and we exchanged a few mentions regarding that very topic. She was making a case for the use of web browsers over apps, citing serial abuse by app owners.

These days, things on the social media platforms seem to have come to a head. The era of innocent social media fun and banter seems to be over. The violation of this innocence probably began with the Arab Spring, when Twitter rose to prominence as a powerful tool for political activism. During the rise of ISIS, merchants of death began to make audacious appearances on the same site. Then came the 2016 US elections and the fallout from the Cambridge Analytica affair, which resulted in much hue and cry about election interference.

Social media is toast. All the same, we still flock to the watering holes, most of us oblivious of the crocodiles that lie in wait (or perhaps many are just too thirsty to even care).

Nowadays social media giants wield a lot of power, power that they derive from other people's data. I am not saying that they are doing anything wrong. I am convinced that the onus lies on the users not to do themselves a disservice by totally trusting and depending on these companies. It seems we are in a time when people simply refuse to accept responsibility for their own lives.

And this is why many have lost out. How many, like me, lost all their data after MySpace was sold? What about all the time invested in Google Plus? How many remember how Facebook vowed not to mash up WhatsApp data with that of the parent company, only to break that promise a few years later? Some people get banned or suspended for harmless political views, simply because some "fact checker" at a company disagrees with them. Does something have to be consensual before being accepted as a fact, and how bland is social interaction without dissent?

Am I advocating the avoidance of social media sites? Absolutely not. That they have done us all a good service is not in doubt. After all, this post is written on WordPress.com, a social blogging site. What I am saying is that we, the consumers, need to start playing smart with our use of these services. There are two simple things I have started to do that could help:

  1. Have a strategy for regularly backing up ALL your data from social media sites.
  2. Develop your own space. Start off by buying a domain. Host your own website – it doesn't have to be anything fancy – and work to mirror your social media content there.

What do you think?

Photo by Nikita Kachanovsky on Unsplash


Filed under Computers & Internet

Another Excel Horror Story

I was trying to create a list of officially approved Health Maintenance Organisations (HMOs) in Nigeria. After jotting down what data I wanted to collect and creating a schema, I paused to decide how to begin. I wanted to have the data as a CSV file first of all, and figured that the cheapest way to start would be to be "graphical" about it. I opted for MS Excel, since I could easily save the results in the desired format. After all, I'm an Office 365 subscriber, so why not give it a try?

If you know anything about me, you are probably aware of my aversion to Excel. After a long romance, our separation was both violent and traumatic. But today I said to myself that I would not be unduly nasty and would give it a shot. I told myself there is no doubt that Excel is a great application, used by millions to great effect.

I found the website of the National Health Insurance Scheme (NHIS) and the page that lists the HMOs. Good. I could have two windows open – the web page on the left and Excel on the right – plug into some good music, and within a few minutes of copy-pasting I should be able to acquire the data.

After a few minutes – when I got to the phone numbers – Excel started off with one of our old quarrels. Somehow, we could never agree on how to handle phone numbers. First, it turned the numbers into scientific notation. Then I tried to set the cell format from "General" to "Text" to allow leading zeros. Then I had to click on the action prompt to indicate that I didn't want formatted text. And even though I applied my settings to the columns that were to accept phone numbers, whenever I hit the next row, I had to start all over again. Arrrrrgh!

I now chastised myself for thinking that Excel was a changed person. How stupid I was! So I had to vent…

Sometimes we do silly things without knowing why. This was one of them. I'm reasonably comfortable with R, and I practically kicked myself, knowing that with the rvest package and a little poking around for HTML tags and/or CSS selectors using SelectorGadget, I could grab the data I so badly needed far more efficiently.

Here’s the code I eventually used to get the job done:

library(rvest)

# Read in the web page that lists the HMOs
nhisHtml <- read_html("https://www.nhis.gov.ng/hmo-contacts/")

# Extract the <table> element(s) and parse them into data frames
tableTag <- html_nodes(nhisHtml, "table")
tblElements <- html_table(tableTag)

# Take the first parsed table and save it as CSV
myDf <- tblElements[[1]]
write.csv(myDf, "data.csv")
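
One caveat on the phone-number front: if data.csv is later read back into R with default type guessing, the numbers can get mangled all over again. Declaring those columns as text up front avoids it (a sketch, assuming the phone column in the CSV is literally named phone):

# Read the phone column as character so leading zeros survive intact
hmos <- read.csv("data.csv", colClasses = c(phone = "character"))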

What on earth was I thinking to even attempt using Excel for this task?


Filed under Computers & Internet