## Friday, December 30, 2016

### rJava: The Gift That Keeps On Giving

I've written not once but twice before (in 2011 and 2015) about the hassles of getting the rJava package to work with R. Every time I think I have a fix for it, someone changes something somewhere and the previous fix no longer works. I had to reinstall rJava today (from the Canonical repositories) after a system upgrade on my PC. So, naturally, it declined to work.

The symptoms, in both R terminal sessions and when running RStudio, were the same as in 2015. Executing `library(rJava)` in an R session earned me an error message containing the same nugget as before:

libjvm.so: cannot open shared object file: No such file or directory

Executing `Sys.getenv(c("JAVA_HOME", "LD_LIBRARY_PATH"))` showed me that the proximate cause was also the same as in 2015: the library path was missing the "/jre" piece near its middle. So, same problem implies same cure, right? Wrong! The paths in /usr/lib/R/etc/ldpaths were correct; they just were not working as expected.

The key lines in /usr/lib/R/etc/ldpaths were as follows.

: ${JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre}
: ${R_JAVA_LD_LIBRARY_PATH=${JAVA_HOME}/lib/amd64/server}

They look correct, but they only take effect if the corresponding environment variables (JAVA_HOME and R_JAVA_LD_LIBRARY_PATH) are not already defined. That is standard Bash behavior: `${VAR=value}` assigns `value` only when `VAR` is unset, and the leading colon is a no-op command that discards the result of the expansion. You can test this in a terminal: execute `JAVA_HOME="silly" R` at the terminal prompt (which temporarily resets the JAVA_HOME environment variable to something, well, silly), and in the R session execute `Sys.getenv("LD_LIBRARY_PATH")`, which prints a path whose Java portion starts with "silly". The source of the predefined value of JAVA_HOME, at least on my system, is /etc/profile.d/jdk.sh, which contains the following lines.

export J2SDKDIR=/usr/lib/jvm/java-8-oracle
export J2REDIR=/usr/lib/jvm/java-8-oracle/jre
export PATH=$PATH:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export DERBY_HOME=/usr/lib/jvm/java-8-oracle/db
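The assign-if-unset behavior is easy to verify in a plain Bash session, independent of R. A minimal demonstration:

```shell
#!/bin/bash
# ${VAR=value} assigns only when VAR is unset; the leading ':' is a no-op
# command that simply discards the result of the expansion.
VAR=preset
: ${VAR=default}
echo "$VAR"    # still "preset" -- the default did not overwrite it
unset VAR
: ${VAR=default}
echo "$VAR"    # now "default" -- VAR was unset, so the default applied
```

This is exactly why the correct-looking lines in ldpaths were powerless against the value exported earlier by jdk.sh.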


One solution would be to append "/jre" to JAVA_HOME in that file (making it identical to J2REDIR), but that has two problems. The first is that whatever installer created /etc/profile.d/jdk.sh is likely to overwrite the change the next time the installer is run. The second, potentially more serious, problem is that JAVA_HOME is presumably being set to its current value for a reason. Changing it might have unforeseen consequences with another program.

So my "solution" (hack) is to modify one line in /usr/lib/R/etc/ldpaths as seen below.
## : ${R_JAVA_LD_LIBRARY_PATH=${JAVA_HOME}/lib/amd64/server}
: ${R_JAVA_LD_LIBRARY_PATH=${JAVA_HOME}/jre/lib/amd64/server}

That's still subject to being overwritten by an installer in the future, but at least it only affects R (and it works).

## Tuesday, December 27, 2016

### Mint Upgrade Woes

As I mentioned a couple of months ago, I upgraded a laptop from Linux Mint 17.3 ("Rebecca") to 18.0 ("Sarah") with minimal difficulties. My laptop serves as a guinea pig for these things. Once I'm sure things work fine on the laptop, I'll consider upgrading my desktop as well.

A few days ago I finally undertook the desktop upgrade. By this time the current Mint version was 18.1 ("Serena"). I was expecting a smooth ride, because I had forgotten one key difference between the laptop and the desktop: the desktop has an NVIDIA display card. I can't believe I forgot to take that into account, because I've written before about problems with NVIDIA hardware drivers.

Shortening (somewhat) a long and painful story, the initial upgrade left the system with an unusable display, forcing me to do a fresh install of 18.1 from CD (which, among other things, removed a whole lot of packages that will need to be reinstalled). Even after that, the display was an adventure.

The initial installation gave me a choice between the open-source Nouveau driver and version 304.132 of the NVIDIA "legacy" driver. This is the same version that previously caused major headaches. Last time it caused a black screen. This time, presumably due to updates in some system modules, I actually got the background for my desktop (with the Mint logo) and ... something. At first it showed only the upper left corner of my desktop, but after assorted uninstall/reinstall/desperately-try-something machinations, I got to the point where it showed the entire desktop ... except the start button and bottom panel. Oh, and windows did not have the maximize/minimize/close controls, some windows could be dragged while others could not, I don't think I could resize windows, and for some reason the menu button in Firefox didn't work. Other than that, things were peachy. I tried using the sgfxi script to change driver versions, but there was no joy to be had. Of the subset of listed versions that I tried (there were too many to mess with every one), the older ones were allegedly incompatible with one of the libraries (Xorg, I think) and the newer ones had the same problems (or were incompatible).

So I decided to stick with the Nouveau drivers, which install with Mint and are the default choice. Their behavior was also a bit odd. Depending on where I was coming from (fresh install, switching from NVIDIA to Nouveau, something else), they might recognize my 1920x1080 Samsung monitor correctly or they might think that my only monitor was a laptop display (limited to 640x480 resolution). The latter resulted in a butt-ugly and more or less unusable result. Eventually I got Nouveau working at 1920x1080 and thought I was good to go. In fact, I almost was.

The one deal-breaker with Nouveau came when I clicked links in email messages (using Thunderbird) to open them in the Firefox browser. Seemingly randomly, about 3/4 or so would work, but occasionally one would lock the system solid (no response to keyboard or mouse) with a corrupted display ("white blizzard"). The only recourse was power off/power on. Google searches suggested changing a couple of settings in Firefox regarding hardware acceleration. One setting no longer exists; I toggled the other, but it did no good.

I put up with that for a day or so before deciding that, absent any cure, I would have to find a way to use the NVIDIA driver. It occurred to me that the things it was screwing up (window controls, desktop panel, menu button ...) were things under the purview of the window manager (which was Cinnamon). So I installed MATE alongside Cinnamon and switched to it. Sure enough, I got back the main menu and bottom panel, window controls and so forth. It's been stable for a few days now (with no freezes from clicking links in email messages), so I think I've confirmed my thesis that the legacy NVIDIA driver does not get along with Cinnamon, at least on my machine.

Bottom line: anyone who's getting screen freezes using an NVIDIA driver with Mint and Cinnamon might want to consider trying an alternate desktop environment.

## Monday, December 26, 2016

### A Panel Launcher Menu for MATE

A recent upgrade to my Linux Mint PC forced me to switch the desktop environment from Cinnamon (which I've been using for years) to MATE. For the most part, that was painless, but a few things from my old desktop did not translate well.

I had the icons on my Cinnamon desktop organized in a way that made sense to me (but would baffle anyone else); MATE decided they needed to be alphabetized and snapped to a grid. That will be easy (but tedious) to fix. The trickier part was recreating the mix of indicators and applets I wanted in the panel at the bottom of the screen.

One feature I've gotten used to is having an applet that pops open a customizable menu of launchers. There's a really excellent Gnome shell extension named MyLauncher that is easy to add in Cinnamon but apparently not available for MATE. Clicking the MyLauncher panel icon opens a pop-up menu that is fully customizable. Menu entries are text, not icons, and you select what the text says and what it does. As one example, I have an entry reading "Search files" that launches Searchmonkey (which I heartily recommend).

I'm a bit finicky (anal?) about how such a menu should work.
1. I want it to list just selected applications, not every application on the system.
2. I want text prompts, not icons. (If I'm thinking "search for a file", my brain is not translating that to "look for a picture of a monkey".)
3. I want to choose the prompts myself. (Before the morning coffee sinks in, I may not remember that "searchmonkey" is the command I'm looking for.)
4. I want a single list; I don't want to have to wade through categories (as one does in the main Mint menu).
5. Adding/editing the menu needs to be reasonably easy.

Anyway, after a bit of searching I found a solution with which I'm comfortable. It combines two programs. Rofi is something of a Swiss army knife, with many functions, one of which is providing menus (with search-as-you-type functionality). Rofi is available from the Canonical repositories, so I was able to install it using Synaptic. The program that actually produces the menus is Menugen, a set of Bash scripts. There's not much to installation: you just download it, unzip it and store it someplace.

With Rofi and Menugen installed, I just had to load my old MyLauncher menu into a text editor, massage it into the format Menugen wants, and park it someplace. I made the new menu an executable script that calls menugen to interpret it. I then added to the panel a custom launcher invoking my menu script, and that was that.

In case it helps anyone, here's an abbreviated version of my script:

#!/home/paul/Software/menugen-master/menugen
#begin main
name="Quick Launch"
numbered=false
add_exec "Text editor"  "xed"
add_exec "Synaptic"  "gksudo synaptic"
#end main


## Sunday, December 18, 2016

### Using an MCE Remote with Mythbuntu

As I chronicled in a previous post, I've been using MythMote on my (Android) cell phone to serve as a remote control for my MythTV installation (running on Mythbuntu) when watching recordings or live TV. For the most part this works fine, but there are little inconveniences associated with it, so I decided it was time to buckle down and get one of my (many) remote controls to do the job. As it happens, I have a Windows Media Center infrared receiver attached to the machine via USB, so the associated remote control was the logical choice.

I took the better part of a day to get it working, so I'll chronicle here the things I learned. Everything here is specific to MythTV running on some version of Linux, and much of it may require adjustment if MythTV is not installed on Mythbuntu, Ubuntu or maybe Linux Mint or Debian.

### Ignore LIRC

As with any of my MythTV adventures, I started by consulting Google. It turns out a lot of the information out there is obsolete if you have a remotely recent operating system. Many (most?) of the posts I found discussed how to work with Linux Infrared Remote Control (LIRC), an add-on package for Linux. The catch is that infrared remote support is baked into recent kernels, so you don't need LIRC unless you want to use certain extended capabilities that it apparently provides. I don't know what they are, and I don't need them.

### One Ring to Control Them All

Get to know the ir-keytable command, which will be your friend through all this.  Running it (with no options) in a terminal told me that my system understood that I had an MCE receiver attached (on /dev/input/event3, as it turns out ... yours may differ). Running

ir-keytable --device /dev/input/event3 --test

then let me test which key events were detected when each button on the remote was pressed.

### BIOS Nonsense

This led to the first "minor" problem: according to ir-keytable, none of the buttons on my remote were registering. The LEDs on the remote and receiver both lit with each button press, so the receiver was "hearing" the remote; it just wasn't being believed by the system. Further consultation with Google led me to this post, which suggested that the problem lies in how some BIOSes support USB 3. The IR receiver was plugged into a USB 2 hub, and I don't have anything attached to the machine via USB 3 (I'm not even sure it has a USB 3 jack), but whatever. I rebooted, got into the BIOS editor, found the setting for USB 3 support (every damn BIOS seems to hide it in a different place, so I can't tell you exactly where to look), and turned it off, then booted back into Mythbuntu. This achieved a partial success: ir-keytable in test mode could see button presses! That's the good news. The bad news is that only a few were interpreted correctly.

### Remapping the Buttons

The next step is to remap the buttons on the remote. The drill is to map buttons to the keys you would use if you were controlling MythTV with a keyboard. An important note here is that I'm only concerned with "playback" controls, not with controlling operating system menus or other applications. Here are some helpful links for that.
• The MythTV wiki Keybindings page lists keyboard keys associated with various operations (e.g., "P" for pause/resume playback). Keep this handy as a reference.
• "HOWTO: Get MCE USB remote working in Ubuntu without using LIRC for MythTV or Kodi (xbmc)" is a blog post with detailed, step-by-step instructions. Use this as your main guide. I will say that I deviated a bit here and there. In particular, the author has you generate the default button mapping and save it directly to its final destination. This is the line

sudo ir-keytable --read --device=/dev/input/event10 > /etc/rc_keymaps/myrc6_mce

in his post. I found it much easier to write the file to the desktop, edit it there, experiment until I got it right, and only then move it to /etc/rc_keymaps. (The move requires superuser rights; editing on the desktop does not.)
• "MythTV: Use All Buttons of Your Remote Control - Without LIRC" is another handy post on the same subject. In particular, it has a table at the end of all possible key codes, which you may find useful as a reference (to tell, for instance, that the escape key is KEY_ESC rather than KEY_ESCAPE).
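In case it helps, the edit-and-test loop I settled into can be expressed as shell commands (the device path and file names are from my system, so yours will almost certainly differ; flags are as documented in the ir-keytable man page):

```shell
# Dump the current (default) keymap somewhere editable; editing it there
# does not require root.
sudo ir-keytable --read --device=/dev/input/event3 > ~/Desktop/myrc6_mce

# ... edit ~/Desktop/myrc6_mce, then clear the old table and load the
# edited one into the driver to try it out:
sudo ir-keytable --clear --write ~/Desktop/myrc6_mce --device=/dev/input/event3

# Verify the new mappings in test mode, pressing buttons on the remote.
sudo ir-keytable --test --device=/dev/input/event3

# Once satisfied, move the file to its permanent home (root required).
sudo mv ~/Desktop/myrc6_mce /etc/rc_keymaps/
```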

### My Remapping

There was some trial and error getting to a mapping I liked, but ultimately I was successful. The "HOWTO" blog post identifies an issue with key codes greater than 255 (those with three-digit hexadecimal codes). Apparently X11 does not like them. I found, however, that some of the two-digit key codes did not work as specified, either, and had to be remapped to other two-digit codes. My remote is an RC6 type (the first of the two black ones shown here -- although I'm in the US, not the UK or AU). Reproduced below are just the key mappings I changed in the "myrc6_mce" file.

scancode 0x800f0400 = KEY_0 # was KEY_NUMERIC_0 (0x200)
scancode 0x800f0401 = KEY_1 # was KEY_NUMERIC_1 (0x201)
scancode 0x800f0402 = KEY_2 # was KEY_NUMERIC_2 (0x202)
scancode 0x800f0403 = KEY_3 # was KEY_NUMERIC_3 (0x203)
scancode 0x800f0404 = KEY_4 # was KEY_NUMERIC_4 (0x204)
scancode 0x800f0405 = KEY_5 # was KEY_NUMERIC_5 (0x205)
scancode 0x800f0406 = KEY_6 # was KEY_NUMERIC_6 (0x206)
scancode 0x800f0407 = KEY_7 # was KEY_NUMERIC_7 (0x207)
scancode 0x800f0408 = KEY_8 # was KEY_NUMERIC_8 (0x208)
scancode 0x800f0409 = KEY_9 # was KEY_NUMERIC_9 (0x209)
scancode 0x800f040a = KEY_BACKSPACE # was KEY_DELETE (0x6f)
scancode 0x800f040d = KEY_M # was KEY_MEDIA (0xe2)
scancode 0x800f040e = KEY_F9 # was KEY_MUTE (0x71)
scancode 0x800f040f = KEY_I # was KEY_INFO (0x166)
scancode 0x800f0410 = KEY_F11 # was KEY_VOLUMEUP (0x73)
scancode 0x800f0411 = KEY_F10 # was KEY_VOLUMEDOWN (0x72)
scancode 0x800f0412 = KEY_UP # was KEY_CHANNELUP (0x192)
scancode 0x800f0413 = KEY_DOWN # was KEY_CHANNELDOWN (0x193)
scancode 0x800f0414 = KEY_RIGHT # was KEY_FASTFORWARD (0xd0)
scancode 0x800f0415 = KEY_LEFT # was KEY_REWIND (0xa8)
scancode 0x800f0416 = KEY_P # was KEY_PLAY (0xcf)
scancode 0x800f0418 = KEY_P # was KEY_PAUSE (0x77)
scancode 0x800f0419 = KEY_SPACE # was KEY_STOP (0x80)
scancode 0x800f041a = KEY_END # was KEY_NEXT (0x197)
scancode 0x800f041b = KEY_HOME # was KEY_PREVIOUS (0x19c)
scancode 0x800f0422 = KEY_ENTER # was KEY_OK (0x160)
scancode 0x800f0423 = KEY_ESC # was KEY_EXIT (0xae)
scancode 0x800f0426 = KEY_S # was KEY_EPG (0x16d)
scancode 0x800f046e = KEY_P # was KEY_PLAYPAUSE (0xa4)
scancode 0x800f0481 = KEY_P # was KEY_PLAYPAUSE (0xa4)


### Stuttering

Update 12/28/16: The one issue I discovered with the remote was a tendency for button presses to be counted twice (occasionally, not always). This can be moderately annoying when running through menus or trying to pause/resume and terribly annoying when trying to fast forward or reverse.

Fortunately, the solution is simple, and hinges on the fact that the button presses are being treated as keyboard input. In Mythbuntu's main menu, navigate to Settings > Accessibility > Keyboard > Slow Keys. Select the check box for "Use slow keys" and set the "Acceptance delay" slider to something around 100 ms. The menu location may vary with other distributions, and you may need to tinker with the delay. Caveat: This will also affect typing on your keyboard. If you are a "hunt and peck" typist, the impact may not be too bad. If you are a touch typist, it can get really annoying, possibly more annoying than the remote button stutter.

## Friday, December 2, 2016

### Support for Benders Decomposition in CPLEX

As of version 12.7, CPLEX has built-in support for Benders decomposition. For details on that (and other changes to CPLEX), I suggest you look at this post on J-F Puget's blog and Xavier Nodet's related slide show. [Update 12/7/16: There is additional information about the Benders support in a presentation by IBM's Andrea Tramontani at the 2016 INFORMS national meeting.]

I've previously posted Java code for a simple example of Benders decomposition (a fixed-cost warehousing/distribution problem). To get a feel for the new features related to Benders, I rewrote that example. The code for the revised example is in this Git repository, and is free for use under a Creative Commons 3.0 license. There is also an issue tracker, if you bump into any bugs.

Going through the code here would be pretty boring, but there are a few bits and pieces I think are worth explaining, and I'll show some far from definitive timing results.

### Modeling Approaches

I have five modeling approaches in the code.
• STANDARD: This is just a single MIP model, using no decomposition. It's present partly for benchmark purposes and partly because a few of the newer approaches build on it.
• MANUAL: This is essentially a reprise of my earlier code. I write two separate models, a master problem (MIP) and a subproblem (LP), just as one would do prior to CPLEX 12.7.
• ANNOTATED: This is the first of three methods exploiting the new features of CPLEX 12.7. I create a single model (by duplicating the STANDARD code) and then annotate it (using the annotation method added in CPLEX 12.7) to tell CPLEX how to split it into a master problem and a single subproblem. The annotation method would also let me create multiple disjoint subproblems, but the example we are using only needs one subproblem.
• WORKERS: This is the ANNOTATED method again, but with a parameter setting giving CPLEX the option to split the single subproblem into two or more subproblems if it sees a structure in the subproblem that suggests partitioning is possible.
• AUTOMATIC: Here I create a single model (identical to the STANDARD method) and, via a parameter setting, let CPLEX decide how to split it into a master problem and one or more subproblems.

### Benders Strategy Parameter

The key to all this is a new parameter, whose Java name is IloCplex.Benders.Strategy. It is integer valued, but for some reason is not part of the IloCplex.IntParam class hierarchy (which threw me off at first when I tried to use it in my code). The values defined for it are:
• -1 = OFF: Use standard branch and cut, doing no Benders decomposition (even if annotations are present). Note that this will not stop manually coded Benders from working (as in my MANUAL method); it just stops CPLEX from creating a decomposition.
• 0 = AUTO: If no annotations are present, do standard branch and cut (akin to the OFF value). If the user supplies annotations, set up a Benders decomposition using those annotations, but partition the subproblems further if possible (akin to the WORKERS behavior below). If the user supplies incorrect annotations, throw an exception. This is the default value.
• 1 = USER: Decompose the problem, adhering strictly to the user's annotations (with no additional decomposition of subproblems). If the user fails to annotate the model, or annotates it incorrectly, throw an exception.
• 2 = WORKERS: This is similar to USER, but gives CPLEX permission to partition subproblems into smaller subproblems if it identifies a way to do so.
• 3 = FULL: CPLEX ignores any user annotations and attempts to decompose the model. If either all the variables are integer (no LP subproblem) or none of them are (no MIP master problem), CPLEX throws an exception.
There are also two other Benders-related parameters, in Java IloCplex.Param.Benders.Tolerances.feasibilitycut and IloCplex.Param.Benders.Tolerances.optimalitycut, that set tolerances for feasibility and optimality cuts. (In my code I leave those at default values. If it ain't broke, don't fix it.)

#### Syntax

Coding the strategy parameter looks like the following.
IloCplex cplex;      // declare a model object
// ... build the model ...
int strategy = ...;  // pick the strategy value to use
cplex.setParam(IloCplex.Param.Benders.Strategy, strategy);


### Annotations

Annotations can be specified in a separate file (if you are reading in a model) or added in code (which is what I do in my demo program). IBM apparently intends annotations to be used more generally than just for Benders decomposition. To use them for Benders, what you do is annotate each model variable with the index number of the problem into which it should be placed (where problem 0 is the master problem and problems 1, 2, ... are subproblems). Only variables are annotated, not constraints or objective functions. If you fail to annotate a variable, it is given a default annotation (discussed further below). If you assign a negative integer as an annotation, the universe will implode spectacularly, and CPLEX will throw an exception just before it does.

You start by creating a single MIP model, as if you were not going to use Benders decomposition. Assuming again that your model exists in a variable named cplex, you begin the decomposition process with a line like the following:
IloCplex.LongAnnotation benders =
cplex.newLongAnnotation("cpxBendersPartition");
The name "benders" for the variable in which the annotation is stored is arbitrary, but the name cpxBendersPartition for the annotation must be used verbatim. This version of the annotation constructor sets the default value to 0, so that variables lacking annotations are assumed to belong to the master problem. An alternate version of newLongAnnotation uses a second argument to set the default value.

Next, you annotate each variable with the index of the problem into which it should be placed. Suppose that we have two variables defined as follows.
IloIntVar x = cplex.boolVar("Use1");  // open warehouse 1?
IloNumVar y = cplex.numVar(0, Double.MAX_VALUE, "Ship12");  // amount shipped from warehouse 1 to customer 2

Assuming that we want x to belong to the master problem (index 0) and y to belong to the first and only subproblem (index 1), we would add the following code:
cplex.setAnnotation(benders, x, 0);
cplex.setAnnotation(benders, y, 1);
where benders is the variable holding our annotation.

There are versions of setAnnotation that take vector arguments, so that you can annotate a vector of variables in a single call. If the model in question has a fairly small number of integer variables and a rather large number of continuous variables (which is pretty common), you might want to use the two-argument version of newLongAnnotation to set the default value at 1, and then annotate only the integer variables, letting the continuous variables take the default annotation (i.e., be assigned to subproblem 1).
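As a sketch of that default-to-subproblem pattern (the variable name `open` is hypothetical, standing in for an array of the model's binary variables, and a built model is assumed to exist in `cplex`):

```java
// Second argument sets the default annotation value to 1, so any variable
// we do not explicitly annotate is assigned to subproblem 1.
IloCplex.LongAnnotation benders =
    cplex.newLongAnnotation("cpxBendersPartition", 1);

// Explicitly annotate only the (few) binary variables into the master
// problem (index 0); the (many) continuous variables keep the default
// annotation and land in subproblem 1.
for (IloIntVar v : open) {
    cplex.setAnnotation(benders, v, 0);
}
```

With many more continuous variables than integer ones, this is less code (and less opportunity for error) than annotating every variable individually.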

That's all there is to creating a Benders decomposition from a standard MIP model. Note, in particular, that you do not need to create callbacks. CPLEX handles all that internally. You can still add things like lazy constraint callbacks if you have a reason to do so; but for a "typical" Benders decomposition (dare I use that phrase?), you just need to create a single MIP model, annotate it, and set the Benders strategy parameter. What could be easier than that? Well, actually, one thing. If you set the Benders strategy to FULL, you don't even have to annotate the model! You can let CPLEX figure out the decomposition on its own.

### Performance

Test runs of a single model (especially one as simple as the fixed-charge warehouse problem) on a single computer (especially one with a measly four cores), written by a single programmer (who happens not to be terribly good at it), don't prove anything. I'll leave it to someone else to do serious benchmarking. That said, I was curious to see if there were any gross differences in performance, and I also wanted to see if all methods led to correct answers. (I'm not what you would call a trusting soul.) So I ran 25 replications of each method, using a different problem instance (and a different random seed for CPLEX) each time. In an attempt to level the playing field, I forced Java to collect garbage after each model was solved (and timing stopped), to minimize the impact of garbage collection on run times. All problem instances were the same size: 50 warehouses serving 4,000 customers. The basic model has these dimensions: 4,050 rows; 200,050 columns (of which 50 are binary variables, the rest continuous); and 400,050 nonzero constraint coefficients.
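For what it's worth, the timing pattern I used is roughly the following (a generic sketch, not the actual benchmark code; `solveModel` is a placeholder for building and solving one problem instance):

```java
// Time a task in wall-clock terms, forcing garbage collection *after* the
// clock stops, so that GC pauses are not charged to the next measurement.
public class TimingSketch {
    static long timeMillis(Runnable task) {
        long start = System.currentTimeMillis();
        task.run();
        long elapsed = System.currentTimeMillis() - start;
        System.gc();  // collect outside the timed region
        return elapsed;
    }

    public static void main(String[] args) {
        Runnable solveModel = () -> {  // stand-in for one CPLEX solve
            double sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += Math.sqrt(i);
        };
        System.out.println("wall clock (ms): " + timeMillis(solveModel));
    }
}
```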

The first plot below shows the wall-clock time spent setting up (and, where relevant, decomposing) each model.
None of the methods strikes me as being overly slow, ignoring a couple of outliers. Using annotations (the "Annotated" and "Workers" methods) does seem to impose a nontrivial burden on model construction.

The second plot shows solution times. For each problem instance, all five methods achieved identical objective values (and proved optimality), and all used the same number of warehouses. (I did not check whether the values of the flow variables were identical.) So any concerns I might have had about validity were assuaged.
I'm not sure how much to read into this, but "manual" decomposition (the old fashioned approach, using explicit callbacks) seems to have the highest variance in solution time. The three approaches using new features of CPLEX 12.7 ("Annotated", "Automatic" and "Workers") had very similar run times. I'm fairly certain that the "Workers" method, wherein CPLEX tries to further partition the single subproblem I specified, wound up sticking with a single subproblem (due to the structure of the model).

### Which Is Better?

Assume that you have a problem that is amenable to "standard" Benders decomposition (as opposed to some of the funky variants, such as combinatorial Benders, Benders with MIP subproblems, Benders with subproblems that are not necessarily optimization problems, ...). The easiest approach from the user perspective is clearly the automatic option (Benders strategy = FULL), in which CPLEX literally does all the work. Runner up is the annotation approach, which is both much easier and much less prone to coding errors than the "manual" approach (defining separate problems and writing a lazy constraint callback).

On the performance side, things are a bit less clear. If you dig through Xavier Nodet's slides, you'll see that CPLEX uses somewhat sophisticated techniques, based on recent research, to generate Benders cuts. I suspect that, on the test problem, their cuts would be a bit stronger than the ones I generate in the "manual" approach, which may account for some of the difference (in their favor) in median run times seen in the second plot. Also, since they can access the subproblems and add cuts to the master directly, rather than having to go through callbacks, using the annotation feature may result in a bit less "drag".

With other cases, I suspect that if you have a particular insight into the model structure that lets you generate problem-specific Benders cuts that are tighter than the generic cuts, you may do better sticking to the manual approach. Fortunately, since you can turn on automatic Benders decomposition just by tweaking a single parameter, it should be easy to tell whether your cuts really do improve performance.

## Thursday, November 10, 2016

### MIP Models in R with OMPR

A number of R libraries now exist for formulating and solving various types of mathematical programs in R (or formulating them in R and solving them with external solvers). For a comprehensive listing, see the Optimization and Mathematical Programming task view on CRAN. I decided to experiment with Dirk Schumacher's OMPR package for R. OMPR is a domain-specific language for linear and mixed-integer linear programs. OMPR currently relies on the R Optimization Infrastructure package (ROI), which uses a plugin architecture to act as a bridge between R code and third-party solvers (including CPLEX). The CPLEX plugin for ROI, in turn, depends on the RCplex package to connect to CPLEX.

Of course, to try out an optimization modeling package you need to have something to optimize. I went back to some ancient research I did, specifically a paper in 1990 on using MIP models to choose linear (or polynomial) discriminant functions for classifying observations into one of two groups. (For the sleep deprived, the full citation is:
Rubin, P. A. Heuristic solution procedures for a mixed-integer programming discriminant model. Managerial and Decision Economics, Vol. 11, No. 4, October 1990.)
I used Anderson's iris data for the test case (since it's conveniently available in R, through the datasets package), and just for the halibut I also threw in a support vector machine (SVM) model using the kernlab package for R. Comparing the two is a bit misleading, since SVM models are inherently nonlinear, but I just felt like doing it. In any case, the purpose is to see how OMPR works with CPLEX, not to compare MIP discriminant models and SVMs.

The details are too lengthy for a blog post, so I posted them separately in an R notebook. If you're not familiar with R notebooks, you can find details here. The web page generated by my notebook contains the source code, and there's a control in the upper right of the web page that will let you download it as a notebook (R Markdown) file. You can also grab it from my GitLab repository for the project. As with other content of this blog, it's yours to play with under a Creative Commons license.

The MIP model is as follows. We start with samples of two classes ("positive" and "negative") containing $p$ features. Let $X_1$ be the $N_1\times p$ matrix of data for the negative sample and let $X_2$ be the $N_2\times p$ matrix of data for the positive sample. The discriminant function we wish to train is $f(x)=w'x+c$; we will classify an observation $x$ as belonging to the positive or negative class according to the sign of $f(x)$. To allow for both limited measurement accuracy of the data and the inevitable adventures with rounding error, we arbitrarily choose some constant $\delta > 0$ and declare a positive observation correctly classified if $f(x)\ge \delta$ and a negative observation correctly classified if $f(x)\le -\delta$.

To avoid the pitfall of having the solver scale $w$ up to some ridiculous magnitude in order to force borderline observations to "look" correctly classified (i.e., to get around the use of $\delta$ as a minimum magnitude for non-zero classification scores), we bound $w$ via its sup norm: $-1 \le w_j \le 1 \, \forall j\in \{1,\dots,p\}$. The constant term $c$ is unbounded and unrestricted in sign.

To detect and count misclassifications (including ambiguous classifications, $-\delta < f(x) < \delta$), we introduce binary variables $y_i,\, i\in\{1,\dots,N_1\}$ for the negative sample and $z_i,\, i\in\{1,\dots,N_2\}$ for the positive sample. Each will take value 0 if the corresponding observation is correctly classified and 1 if not. In the OMPR demo, I just count total misclassifications ($\sum_i y_i + \sum_i z_i$); in general, misclassifications can be weighted to account for prior probabilities, oversampling of one group relative to the other, or relative importance of different types of errors (e.g., false positives looking for cancer tumors are bad, but false negatives can be fatal). There is also a variable $d$ that captures the minimum absolute value of any correct classification score (i.e., how far correct scores are from the boundary value of 0). Larger values are rewarded in the objective function (using a coefficient $\epsilon > 0$).

The model also contains "big M" type constraints to define the binary variables. Formulas for selecting the values of the parameters $M_1$, $M_2$ and $\epsilon$ are given in the paper. So, finally, we get to the MIP model:

\begin{alignat*}{2}
 & \textrm{minimize} & \mathbf{1}^{\prime}y+\mathbf{1}^{\prime}z-\epsilon d\\
 & \textrm{subject to}\quad & X_{1}w+c \cdot \mathbf{1}+d \cdot \mathbf{1}-M_{1} \cdot y & \le 0\\
 & & X_{2}w+c \cdot \mathbf{1}-d \cdot \mathbf{1}+M_{2} \cdot z & \ge 0\\
 & & w & \le \mathbf{1}\\
 & & w & \ge -\mathbf{1}\\
 & & c \quad & \textrm{free}\\
 & & d \quad & \ge \delta\\
 & & y, \: z \quad & \textrm{binary}
\end{alignat*}
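For readers who have not seen OMPR before, here is my own abbreviated sketch of how this model might be expressed (it is not the notebook code). The toy data, big-M values and coefficients below are made up purely so the sketch is self-contained, and I again use c0 for the constant term $c$:

```r
library(ompr)
library(magrittr)

# Toy data so the sketch runs; in the real application these come from the
# iris data and the parameter formulas in the paper.
p <- 2; N1 <- 3; N2 <- 3
X1 <- matrix(rnorm(N1 * p), N1, p)       # negative sample
X2 <- matrix(rnorm(N2 * p) + 2, N2, p)   # positive sample
M1 <- 10; M2 <- 10; eps <- 0.01; delta <- 0.001

model <- MIPModel() %>%
  add_variable(w[j], j = 1:p, lb = -1, ub = 1) %>%       # sup-norm bound on w
  add_variable(c0) %>%                                   # free constant term
  add_variable(d, lb = delta) %>%                        # minimum margin
  add_variable(y[i], i = 1:N1, type = "binary") %>%      # negative-sample errors
  add_variable(z[i], i = 1:N2, type = "binary") %>%      # positive-sample errors
  set_objective(sum_expr(y[i], i = 1:N1) + sum_expr(z[i], i = 1:N2) - eps * d,
                sense = "min") %>%
  add_constraint(sum_expr(X1[i, j] * w[j], j = 1:p) + c0 + d - M1 * y[i] <= 0,
                 i = 1:N1) %>%
  add_constraint(sum_expr(X2[i, j] * w[j], j = 1:p) + c0 - d + M2 * z[i] >= 0,
                 i = 1:N2)
```

From here, the notebook hands the model to a solver via a `solve_model()` call; the point of the sketch is just the one-to-one match between the algebra above and the OMPR statements.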

## Monday, November 7, 2016

### Interactive R Plots with GGPlot2 and Plotly

I refactored a recent Shiny project, using Hadley Wickham's ggplot2 library to produce high quality plots. One particular feature the project requires is the ability to hover over a plot and get information about the nearest point (generally referred to as "hover text" or a "tool tip"). There are multiple ways to turn static ggplots into interactive plots, and I've experimented with a few. I ended up using the R plotly library, which worked well. (The free community version was sufficient for my purposes.) The Plotly site has a short introduction to facilitate getting started. I did bump into a few "wrinkles", and thought I would record how I handled them.

Before getting into details, I'll just mention the obvious: besides installing the libraries, you need to load them in your application with library(ggplot2) and library(plotly). Also, I'll point out the default behavior for the plotly library with regard to hover text: hovering shows a pop-up with the coordinates of the point over which (or close to which) you are hovering. You can optionally add text of your choice to the pop-up.

Partly as a consequence of how variables were named in the data frames being passed to the ggplot() function, I sometimes encountered difficulty getting the coordinates properly labeled. For instance, I might want the coordinates labeled "MPG" and "Residual" but I'd end up with "x" and "y". Eventually I decided the simplest solution was to suppress the display of the coordinates (displaying just text) and then put everything I wanted (coordinate values, coordinate labels, observation labels) into a single, formatted hover text entry. The details of how I did that are the gist of this post.

#### ui.R

In the user interface file, displaying a plot is extremely simple. If the plot is going to be passed in output$myplot, the code to display it is just plotlyOutput("myplot"). In some cases, to avoid overly large plots or plots with unappealing aspect ratios, I would display them with plotlyOutput("myplot", height = "auto").

#### server.R

In the server file, assume that the plot in question (an instance of class "ggplot") is in a variable p. The code to convert it for display via Plotly with default hover information is just output$myplot <- ggplotly(p). To suppress the default information and just show my custom text, it's output$myplot <- ggplotly(p, tooltip = "text").

When generating a plot with custom text, the text is provided as an "aesthetic", an argument to the aes() function either in the call to ggplot() or in the call to a "geom" being plotted. One of the more complicated instances in my application involves plotting a histogram of residuals from a regression model with a smoothed density curve superimposed. The code for that (in which the residuals are supplied in a data frame df containing a single column named "r") is as follows:

df %>%
  ggplot(mapping = aes(x = r)) +
  labs(x = "Standardized Residual", y = "Density") +
  geom_histogram(aes(y = ..density..,
                     text = sprintf("Std.Res.: %.2f<br>Count: %d", x, ..count..))) +
  geom_density(color = "Navy",
               aes(text = sprintf("Std.Res: %.2f<br>Density: %.2f", x, ..density..))) +
  geom_vline(xintercept = 0, color = "DarkGreen")

Note the calls to sprintf() to format the tool tips (with different information for histogram bars and the density curve). Also note the use of the HTML break tag (<br>) to separate lines within the tool tip.

#### The inevitable edge case

As von Moltke (the Elder) noted, no battle plan survives contact with the enemy ... and no coding scheme handles all cases. One of my plots, a QQ plot of residuals, tripped me up. I wanted to identify each point with its row name.
No matter where I put aes(text = ...), it failed to work. So I resorted to an alternative approach. Assume that the QQ plot (an instance of class "ggplot") is in variable p, the source data is in data frame df, and the residuals are in data frame rf, with the graph ending up in output$qPlot. The code in the user interface is unchanged. The server code looks like the following:

output$qPlot <- renderPlotly({
  z <- plotly_build(p)
  ix <- sort(rf, index.return = TRUE)$ix
  z$x$data[[1]]$text <- rownames(df)[ix]
  z
})

Some explanation is clearly in order. The plotly_build() function turns a plot generated by ggplot() into a list of instructions to the plotly.js Javascript library. That in turn is formatted by the renderPlotly() function into something plotlyOutput() can digest. Within the list plotly_build() created (stored in local variable z), z$x$data[[1]]$text turns out to be a vector of hover text entries, one per point. I replace those with the row names from the data set. (I could use sprintf() to include the coordinates of the points, but for this particular application that wasn't particularly useful.)

Finally, I have to adjust for the fact that, in a QQ plot, observations are plotted in ascending order, not in their original order.  To match the correct row name to each observation, I sort the residuals, store the sort order index vector in ix, and then use that to reorder the row names.
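A toy illustration of that bookkeeping (the residual values here are made up):

```r
res    <- c(0.5, -1.2, 0.3)              # hypothetical residuals
labels <- c("obs1", "obs2", "obs3")      # corresponding row names
ix <- sort(res, index.return = TRUE)$ix  # ascending order of residuals: 2, 3, 1
labels[ix]                               # "obs2" "obs3" "obs1"
```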

## Friday, November 4, 2016

### Surviving an nVidia Driver Update

Scenario: I'm running Linux Mint 17.3 Rebecca (based on Ubuntu 14.04) on a PC with a GeForce 6150SE nForce 430 graphics card. My desktop environment is Cinnamon. The graphics card is a bit long in the tooth, but it's been running fine with the supported nVidia proprietary driver for quite some time. Unfortunately, having no reason to do so, I had not noted down what version of the driver I had.

Yesterday I installed a couple of "recommended" updates, one of which bumped the nVidia driver to version 304.132. Apparently this driver is "recommended" for people who don't want to see their desktops. On the next boot after the upgrade, I got a black screen. To be clear, there's no problem before the desktop loads -- I can see the BIOS messages at the start of the boot and the grub menu where I get to choose which version of the operating system to boot. It's only when we get to the desktop that the display fails.

A bit of searching showed me that I'm far from the only person experiencing this. What's lacking (at least as of this writing) is a definitive fix. I'll skip the gory details of a day and a half of fruitless hacking and cut to the chase scene.

#### Getting to a terminal

The first step was to escape the black hole of my desktop. That was easier said than done. You can't right click on an invisible desktop (or at least if you do it's unproductive). Ditto trying to get to the application menu. Fortunately, control+alt+F2 did manage to kill the (worthless) X session and get me to a terminal. (The display worked fine in terminal mode.)

#### Getting to a desktop

It's a bit like cutting off your leg to get rid of a broken toe, but one way to get out of nVidia Hell is to get nVidia off your system. So in the terminal I ran

sudo apt-get purge nvidia*

(which deleted all nVidia packages) followed by

sudo reboot now

(which did exactly what you would think). With nVidia gone, the system was forced to use the open source "nouveau" driver. Unfortunately, the nouveau driver seemed to be hopelessly confused about what sort of display I had (it called it a "laptop display") and what resolution to use. The result was a largely unusable (but at least mostly visible) desktop.

#### Rolling way, way back

My hope was to roll back to the previous nVidia driver, but that hope was quickly dashed. I was able to run the device manager. (You have two ways to do this, depending on how good or bad the nouveau display is. One is to use the Mint menu to run "Device Manager", if you can. The other is to open a terminal and run "gksudo device-manager".) The device manager listed three possible drivers for me. The first was the multiple-expletives-deleted 304.132 nVidia driver, the second was the nouveau driver, and the third was the version 173.14.39 nVidia driver. So I picked the third, applied the changes and restarted.

This got me a fully functional desktop (at the correct resolution), but performance was less than stellar, as one might expect from that old a driver. There were noticeable lags between some operations (such as clicking the close control on a window) and their results (window actually closing). More importantly, if I suspended the machine, when I tried to resume I could not get the desktop back. So version 173 was not the permanent solution.

#### Rolling back just a little

I've mentioned the sgfxi script before. I tried running it, but it wanted to install the latest supported version, which was that nasty 304.132 thing. After screwing around for way too long, I discovered I could in fact roll back with the script.

The first step was to kill the X server, since sgfxi won't run while X is running. So I used control+alt+F2 to get to a terminal, logged in, and ran

sudo service mdm stop

to get rid of the display manager. That forced me to log in again, after which I ran

sudo su -
sgfxi --advanced-options | less

for two reasons. One was to find the correct option for specifying a particular driver version (it's -o). The other was to get a list of available versions, which appears near the top of the output.

I tried a few of the more recent ones listed, but was told either that they weren't supported (despite appearing in the list) or that some other package was the wrong version to mesh with them. Fortunately, 304.131 could be installed. I assume that was released immediately before the ill-fated 304.132. So once more unto the breach: running (as root)

sgfxi -o 304.131

worked. I was prompted to make a few decisions (one or two of which I simply guessed about), and I got one error message during a cleanup phase, but the script did install the driver and terminated. I rebooted and the system seems to be working normally. It doesn't feel sluggish any more, and returning from a nap is no problem.

The earlier purge had also removed the nvidia-settings package, so I used the Synaptic package manager to reinstall version 304 of that. It provides a graphical interface to adjust display settings, although so far the defaults seem to work just fine for me.

Now I just need to be sure never, ever, ever again to update that driver.

## Sunday, October 16, 2016

### Mint Upgrade Problem Solved

I decided to upgrade first my laptop (the canary in the mine shaft), then my desktop (assuming the canary lived), from Linux Mint 17.3 ("Rebecca") to 18.0 ("Sarah"). The "old school" approach would be to download the distribution to a CD/DVD, or create a bootable version on a USB stick, test it and then install it. Mint now provides a somewhat easier option, using a tool named mintupgrade. There's an upgrade tutorial on their web site, but I preferred to use a slightly modified approach suggested by a ZDNet writer, because it copies the voluminous terminal output to text files, making for easier searching (and preserving details that might be useful should a bug report need to be filed).

Unfortunately, early in the process I ran into a major hurdle. The first step is to run the Update Manager and verify that all level 1, 2 and 3 updates are installed. I did that, and my laptop appeared to be up to date. After installing mintupgrade, I ran 'mintupgrade check', and was told that in fact my laptop was not up to date, and so the upgrade could not proceed. Unfortunately, the check does not report which packages are not current.

Ignoring Einstein's definition of insanity, I ran Update Manager again (no change) and 'mintupgrade check' again (also no change). Next was a Google search that turned up other people with the same problem but no apparent solutions other than to go the traditional route (download the full installer, etc.). Something I read did cause me to run Update Manager and check whether I had excluded any packages from updating. (I had not.)

Fortunately, while I was mucking around with Update Manager, I decided to check Edit > Preferences > Levels. Lo and behold, the check box for level 3 "Safe updates" in the "Visible?" column was not selected. In other words, Update Manager (following a preference I don't recall ever setting) was not bothering to tell me about level 3 updates. Curiously, on my desktop the box was checked.

At any rate, I checked the box on the laptop, refreshed, and installed the rather lengthy list of level 3 updates of which I'd been unaware. After that, I was able to continue with the upgrade process. (After a warning from the upgrade script, I killed the screen saver, lest I find myself locked out of the user interface midway through the upgrade.) My laptop is not what you would call a gaming system (unless by "gaming" you mean "solitaire"), so I was expecting the upgrade to take some time. So I processed some email on my desktop, played a few (dozen) hands of solitaire, drove to the supermarket to pick up the week's groceries, returned home and stowed them, paid some bills ... and it was still cranking away, with no end in sight.

When it finally finished, and I rebooted, my laptop worked fine. I did notice one problem, though, which I probably should have anticipated. In the Software Sources application, all PPAs had been deleted (as well as anything in "Additional repositories"). Interestingly, the authentication keys for unofficial repositories were intact. Deleting the PPAs makes sense; they tend to be specific to a particular version of a distribution, and so would have to be updated. Still, having to replace them is a bit of a nuisance, and for future reference (meaning when I get around to doing this on my desktop) it will be easier if I make a list of them before upgrading.

## Wednesday, October 12, 2016

### DNS on Home WiFi

Like a lot of people, I have a home WiFi network, and I recently changed my Internet service provider. That required replacing my cable modem. Both the old and new modems provided a DHCP service that doled out local IP addresses to devices connecting to the home WiFi network. Both modems allowed me to designate static addresses for some devices. The old modem also provided a local name lookup service. The new modem does not.

What do I mean by "local name lookup service"? Each of my machines has a "host name", something like "MainPC", "Paul_laptop" etc. With the old modem, when I was logged into one machine and needed to connect remotely to another, I could just use the host name (e.g., working on laptop and connecting to "MainPC"), and the modem would find the correct IP address to use. With the new modem, "MainPC" just comes back as "host not found" or something similar. I can still get one machine to talk to another by typing in the IP address of the destination machine, but that's a PITA.

After some searching around, I found the answer to my problems here. Note that this only applies to Linux-based machines, which all mine are. In short, I needed to make sure the following four packages (and any dependencies) were installed:
• avahi-daemon;
• avahi-utils;
• avahi-dnsconfd; and
• avahi-discover.
For some reason, all my machines had the first two and none had the third and fourth. Now that they are installed, all I have to do is remember to qualify my machine names with the pseudo-domain "local" (e.g., "MainPC.local", "Paul_laptop.local"), and I can connect from one machine to another without fishing around for IP addresses.

I'll end with one last comment. Two of my machines run Linux Mint, and on them the name discovery service worked as soon as I installed the missing packages. One other machine runs a somewhat dated version of Mythbuntu, and that one had to be rebooted after installing the same two packages before the service worked. I'm not sure why, but I also don't really care. All's well that ends well, as some Brit said.

## Monday, September 19, 2016

### Better Estimate, Worse Result

I thought I might use a few graphs to help explain an answer I posted on Quora recently. First, I'll repeat the question here:

In parametric optimization with unknown parameters, does a better estimator of the parameter always lead to better solutions to the actual problem?

Here we’re trying to minimize $f(x,\theta)$ with respect to $x$, but we don’t know the value of $\theta$. We may have several estimators of $\theta$, and we’re comparing them in a mean square error sense (or by some other loss function in general).
One would like to believe that better estimates lead to better answers, but the word "always" is a deal-breaker. Well, with one exception. The answer to the question is (trivially) yes if "better estimator" is interpreted to mean "estimator yielding better objective value". For mean square error or any other likely loss function, though, "better" is going to mean "closer to $\theta$" (in some sense of "closer"), and that's when things fall apart.

I'm going to take a couple of liberties in my answer. First, I'm going to change minimization of $f$ to maximization, just because it makes the pictures a little easier to interpret. Clearly that makes no meaningful difference. Second, I'm going to assume that maximization of $f$ is constrained. I can cobble together a similar but unconstrained counterexample by replacing the constraints with a penalty function (and assuming some bounds on the domain of $\theta$), but the constrained counterexample is (I think) easier to understand.

For my counterexample, I'll take a simple linear program in which the constraints are known with certainty but $\theta$ provides the objective coefficients:

$$\begin{array}{lrcl} \max & \theta_{x}x+\theta_{y}y\\ \text{s.t.} & x+y & \le & 3\\ & x & \le & 2\\ & y & \le & 2\\ & -x & \le & 0\\ & -y & \le & 0 \end{array}$$
I deliberately wrote the sign constraints $x \ge 0, y \ge 0$ "backwards" to fit Figure 1 below.

 Figure 1

The feasible region is the polytope with vertices A, B, C, D and E. The dotted lines are (unit length) normals to the constraint hyperplanes. So $a^{(2)}$ is the coefficient vector (1, 1) of the first constraint, scaled to unit length, $a^{(5)}=(-1, 0)$ is the coefficient vector of the sign restriction on $x$ and so on. (I numbered the normals in clockwise order around the polytope, rather than in the order the constraints are written above.)
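If you want to reproduce the normals, a quick base-R computation (rows listed in the order the constraints appear in the LP above, not in the clockwise figure numbering):

```r
# Coefficient rows of the five constraints: x+y<=3, x<=2, y<=2, -x<=0, -y<=0.
A <- rbind(c(1, 1), c(1, 0), c(0, 1), c(-1, 0), c(0, -1))
normals <- A / sqrt(rowSums(A^2))  # each row rescaled to unit length
normals[1, ]                       # the normal for x + y <= 3: (1,1)/sqrt(2)
```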

The red dashed vector represents a possible realization of the parameter $\theta$. Since we're maximizing, $f$ increases in the direction that $\theta$ points, so for this particular realization vertex B is the unique optimum.

Note that $\theta$ in Figure 1 is between $a^{(1)}$ and $a^{(2)}$. This is not a coincidence. If $\theta$ is parallel to one of the constraint normals $a^{(j)}$, all points on the edge corresponding to that constraint will be optimal. If $\theta$ lies strictly between two constraint normals, the vertex where their edges intersect will be the unique optimum. This is easy to prove (but I won't bother proving it here).
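For the record, the proof is essentially a one-liner from LP duality (complementary slackness). With $A_v$ denoting the submatrix of constraint rows binding at a vertex $v$ of $\{x : Ax \le b\}$:

```latex
v \in \arg\max \{\theta' x : Ax \le b\}
  \iff \theta = A_v' \lambda \ \text{for some}\ \lambda \ge 0
```

In words, $v$ is optimal precisely when $\theta$ lies in the cone generated by the normals of the constraints active at $v$, and when $\theta$ is in the interior of that cone, $v$ is the unique optimum.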

Figure 2 shows this situation in the same two dimensional space. The five constraint normals divide the space into five circular cones, with each cone corresponding to a vertex. If $\theta$ lands in the interior of a cone, that vertex is optimal. If it falls on an edge between two cones, both the corresponding vertices (and all points along the edge between them) are optimal.

 Figure 2
In Figure 2 I've drawn a particular realization of $\theta$ (deliberately not the one from Figure 1), and two estimators $\hat{\theta}$ and $\tilde{\theta}$. Unfortunately, I'm limited to raster image formats and not vector formats in the blog, so the plots get rather fuzzy if you zoom in on them. You'll have to take my word for it that $\hat{\theta}$ is the black vector just above the negative $x$ axis and $\tilde{\theta}$ is the black vector just clockwise of the positive $y$ axis. Alternatively, you can click on the image and download it or open it. (The images have transparent backgrounds, so I recommend opening them in your browser, or some other application that will supply a background.)

By design, I've selected them so that $\tilde{\theta}$ is geometrically closer to $\theta$ than $\hat{\theta}$ is. That should make $\tilde{\theta}$ the better estimator of the two in terms of squared error, absolute deviation ($L_1$ error), etc. Unfortunately, using $\tilde{\theta}$ results in vertex B being the unique optimum. Using the true coefficient vector $\theta$, vertex A is optimal, and the seemingly inferior estimator $\hat{\theta}$ also falls in the cone where A is optimal.

Thus using the "inferior" estimator $\hat{\theta}$ gets you the correct answer, albeit with a somewhat incorrect prediction for the value of the objective function. Using the "superior" estimator $\tilde{\theta}$ gets you a flat-out suboptimal solution. In Figure 1, the incorrect solution B looks to be not too far (geometrically) from the correct solution A, but I could easily cook this example so that B is a long way from A, guaranteeing a massively incorrect result.
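Since the feasible region has only five vertices, the phenomenon is easy to replicate numerically by enumeration. The $\theta$ values below are invented for illustration (they are not the ones plotted in the figures, and the vertex labels need not match Figure 1):

```r
# Extreme points of {x+y<=3, x<=2, y<=2, x>=0, y>=0}.
vertices <- rbind(c(0, 0), c(2, 0), c(2, 1), c(1, 2), c(0, 2))
best <- function(theta) vertices[which.max(vertices %*% theta), ]

theta       <- c(0.05, 1)   # "true" objective coefficients
theta_hat   <- c(0.50, 1)   # far from theta (distance 0.45), same cone
theta_tilde <- c(-0.02, 1)  # much closer (distance 0.07), crosses a cone boundary

best(theta)        # 1 2  (the true optimum)
best(theta_hat)    # 1 2  -- worse estimate, correct vertex
best(theta_tilde)  # 0 2  -- better estimate, wrong vertex
```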

All that said, I would still gravitate to the more accurate estimator and not sweat the occasional pathological outcome. Sadly, though, there is once again no guarantee that doing the right thing leads to the best result.

## Saturday, September 10, 2016

### Hovering Over Shiny Plots

I'm following up on yesterday's post, "Formatting in a Shiny App". One of the features I added to my Shiny application was the ability to identify a point in a plot by hovering over it. Since I wanted to do this in several different plots, and did not want to reproduce the logic each time, I abstracted part of it out into a function. I thought I'd post the function here, in case it helps anyone else.

Without further ado, here's the function (including Roxygen documentation):


#'
#' Use hover information reported by Shiny to select a caption suitable
#' for the observation closest to the mouse (or no caption if the mouse
#' is too far from the nearest observation). Proximity is measured using
#' the 1-norm, after scaling abscissa and ordinate by the bounds of
#' the plot region.
#'
#' @param hover the hover information reported by Shiny
#' @param x the vector of x values of observations being plotted
#' @param y the vector of y values of observations being plotted
#' @param captions a vector of captions (e.g., row names), of the same
#' length as x and y
#' @param tolerance the maximum distance (in the weighted 1-norm) that
#' a point can be from the mouse and be selected
#'
#' @return if a point is within tolerance of the mouse location, the
#' caption corresponding to the closest point is printed (suitable for
#' capture by renderPrint)
#'

hoverValue <- function(hover, x, y, captions, tolerance = 0.05) {
  if (!is.null(hover)) {
    x0 <- hover$x  # x coordinate in user space
    y0 <- hover$y  # y coordinate in user space
    xrange <- hover$domain$right - hover$domain$left
    yrange <- hover$domain$top - hover$domain$bottom
    # find the index of the observation closest in scaled 1-norm to the mouse
    dist <- abs(x0 - x) / xrange + abs(y0 - y) / yrange
    i <- which.min(dist)
    # print the corresponding caption if close enough, else a placeholder
    if (dist[i] < tolerance) {
      cat(captions[i])
    } else {
      cat("...")
    }
  } else {
    cat("...")
  }
}


Here's how we apply this. In what follows, anything in CAPS is a totally arbitrary name you pick for an object. Somewhere in the user interface file (ui.R), we display a plot with

plotOutput("XYZ", hover = hoverOpts(id = "XYZH", ...), ...)

where the ellipsis is any additional arguments you need. (See the help entry for shiny::hoverOpts for options it takes.) This tells Shiny to look in output$XYZ for the contents of the plot, and to return information about where the mouse is hovering in input$XYZH. The structure of input$XYZH is a list of lists, and I won't go into all the details here. On the server side, you pass input$XYZH as the first argument to my hoverValue function. The second (x) and third (y) arguments are numeric vectors giving the abscissas and ordinates of the plotted points. The fourth argument (captions) is a text vector of the same length as x and y, giving the caption you want displayed for each point. If x and y are columns from some data frame df, then row.names(df) would be a logical candidate for the captions.

Bear in mind that plotting a graph involves a transformation from what I'll call "user coordinates" (the scales on which the original variables are measured) to "screen coordinates" (something like pixels or millimeters from the edges of the screen area devoted to the plot). The hover data input$XYZH contains, among other things, the location of the point where the mouse is hovering in both "user" and "screen" coordinates.

In order to figure out which point is closest to the mouse, we compute the distance (in user coordinates) from the mouse to each of the plotted points. I elected to use the 1-norm, but you could just as easily use the 2-norm or any other norm you like. Because the x and y variables will typically be measured in different units, we need to use a weighted norm. I scaled them by the width and height, respectively, of the plot area in "user" coordinates, obtained from input$XYZH$domain. (There is another sublist of input$XYZH that has the left, right, top and bottom limits in "screen" coordinates, in case you wondered.)

The last argument of the function, for which I provided a (very arbitrary) default, is the minimum required proximity to a point. If none of the plot points are this close or closer to the mouse location, we assume that the mouse is hovering in empty space, and return a placeholder string. (I used "...", but feel free to insert your favorite phrase.) If any points do lie at least that close to the mouse, the index of the closest one is selected, and the corresponding caption is coughed up. If input$XYZH is null (which I think it might be if no hovering is going on), the placeholder is again returned.

As noted in the comments, the caption is returned by "printing" it (using the cat() function). If this were a script running in a console, the caption would print in the console.
Instead, in the server script (server.R) we use

output$XYZT <- renderPrint({hoverValue(input$XYZH, ...)})

to capture the "printed" text and pass it to the user interface, and in the user interface (ui.R) we use either textOutput("XYZT", ...) or htmlOutput("XYZT", ...) (depending on whether you want to apply any CSS styling) to show the caption.

## Friday, September 9, 2016

### Formatting in a Shiny App

I've been updating a Shiny (web-based interactive R) application, during the course of which I needed to make a couple of cosmetic fixes. Both proved to be oddly difficult. Extensive use of Google (I think I melted one of their cloud servers) eventually turned up enough clues to get both done. I'm going to record the solutions here, lest I forget them. Unfortunately, I can't give credit where credit is due; in the heat of trying all sorts of fixes (many of which did not work), I neglected to take note of the URLs for the ones that helped.

#### Centering Text

On the surface, this sounds trivial: just put it inside an HTML "div" or "span" tag, with appropriate style information. In my case, though, I was generating the text dynamically using the verbatimTextOutput() function. More specifically, I was positioning two plots side by side, with dynamic text under each one. In the UI file, this was done using a "fluid row" containing two columns of width 6 each. (I won't explain the spacing convention for fluid rows here. It's enough to know that the available width is 12 units, so this splits the row into two columns of equal size.) One plot, followed by the text, went into each column.

According to the help for the column() function, the syntax is

column(width, ..., offset = 0)

where ... is the list of elements to include in the column. It turns out that column() also understands an alignment parameter, so

column(6, align = "center", plotOutput(...), verbatimTextOutput(...))

worked just fine for me. That's certainly simple enough, once you know about the align argument.
#### Aspect Ratio

My other challenge was a bit trickier. I wanted to control the aspect ratio of the two plots. The plotOutput() function has width and height arguments, which take (string) values that can be either a percentage of available space, an integer number of pixels, or "auto". Per the help entry,

Note that, for height, using "auto" or "100%" generally will not work as expected, because of how height is computed with HTML/CSS.

Truer words have never been written. Width defaults to 100%, which in the context of my two-column fluid row means that each plot scales to fill half the width of the available space. That's fine. I want the plots to expand when the user expands the browser window (or goes to full screen). I tried both "100%" and "auto" for the height argument, and in both cases the result was that the plots had zero height (did not display at all). Oops.

Ideally, I would like to say something like "height = width" (for a 1:1 aspect ratio) or "height = 0.8 * width" (for a 5:4 aspect ratio). Unfortunately, the height is being handled by Javascript, not R code, and the Javascript apparently does not support expressions. So that left the option of computing the height in the server R code, but doing so requires knowing the aspect ratio (a constant) and the width of the plot, which is determined in the UI.

Once again, an undocumented argument (or one documented in some sacred scroll not available to lay people) comes into play. Let's say that the identifier I use for my plot is "xyz". The UI contains the function call

plotOutput("xyz", height = "auto", ...)

causing the UI to look for the output to plot in output$xyz. The server script in turn uses

output$xyz <- renderPlot(...)

to generate the plot. Here's the code that ended up working:
output$xyz <- renderPlot(..., height = function() {
  aspect * session$clientData$output_xyz_width
})

(where "..." is the code generating the actual plot and aspect is a variable containing the target ratio of height to width, 0.8 in my case). The list session$clientData contains information used by the Shiny server to keep track of one session versus another. (If you and I are concurrently running the same application, we each have a unique session, which is how Shiny keeps straight which data and results belong to you and which belong to me.) Apparently that list includes a numeric item giving the display width of output$xyz, with the name mangled to output_xyz_width for reasons unknown to me.

Using this code, the server script computes and sets the correct height for the plot, and apparently "auto" in the UI script tells it to defer to the height computed by the server.
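To see both halves in one place, here is a minimal, self-contained sketch of the pattern. The plot id "xyz", the placeholder plotting code, and the 0.8 ratio are just illustrations; substitute your own.

```r
library(shiny)

aspect <- 0.8  # target ratio of height to width (5:4)

ui <- fluidPage(
  fluidRow(
    # height = "auto" defers to the height computed on the server side
    column(6, plotOutput("xyz", width = "100%", height = "auto"))
  )
)

server <- function(input, output, session) {
  output$xyz <- renderPlot({
    plot(1:10, (1:10)^2)  # placeholder plot; use your own code here
  },
  # client-reported width, with the id mangled to output_xyz_width
  height = function() aspect * session$clientData$output_xyz_width
  )
}

shinyApp(ui, server)
```

Resizing the browser window should now resize the plot while preserving the 5:4 shape.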

## Tuesday, August 9, 2016

### Some R Resources

(Should I have spelled the last word in the title "ResouRces" or "resouRces"? The R community has a bit of a fascination about capitalizing the letter "r" as often as possible.)

Anyway, getting down to business, I thought I'd post links to a few resources related to the R statistical language/system/ecology that I think may be either not terribly well known or perhaps a bit under-appreciated. As I come across new ones (or see comments rubbing my nose in some glaring omission),  I will try to remember to update the list.

In no particular order, here is what I have so far. If anyone sees a better way to organize the list, feel free to suggest it in the comments. Also, do not assume my level of verbosity for any item is an indicator of my judgment of its importance or utility. It's purely random.
R Bloggers
R Bloggers is pretty much what it sounds like: an aggregator for, per their subtitle, "R news and tutorials contributed by (580) R bloggers". Given the 580 sources, you can safely assume I am not keeping up with all the reading. They provide the usual RSS feeds and such should you wish to subscribe.
RStudio webinars
RStudio is of course famous for the eponymous IDE (which I highly recommend), along with the Shiny package for interactive web documents and various other highly useful software (including server versions). They also produce a series of high quality webinars. You can get on their email list for advance warning of dates and topics, or drop by their webinar repository after the fact to replay previous webinars (all of which are recorded) and optionally download supporting materials.
The R Podcast
Beginner's Guide to R
Sharon Machlis wrote this beginner's guide for Computerworld. If you are brand new to R, this is a very good place to start. You might also want to follow Sharon to find other helpful articles relating to R, such as "Great R packages for data import, wrangling & visualization".
Google's R Style Guide
Okay, cards on the table: some people take coding style (and in particular consistency of coding style) quite seriously, and some pay it no attention at all. Also, some people view Google as the center of the digital universe, and some think it is the forerunner to either Big Brother or Skynet (or both). So maybe you will be interested in Google's coding style guide for R, and maybe you won't. At any rate, it's there for the asking.
Gallery of htmlwidgets for R
You can do some really whiz-bang interactive things with R and some form of web publishing (Shiny, RMarkdown, R dashboards, ...). For interactive controls and, particularly, interactive graphics, they generally end up using htmlwidgets, which are Javascript libraries that do much of the interactive stuff in the user's browser, as opposed to on the server. (I hope I got that right.) Anyway, there's a gallery of widgets that you can browse to get ideas for what can be done (and which packages/widgets you need to do them).
Hadley Wickham's Vocabulary page
Hadley Wickham, besides being a data scientist, is one of the most prolific authors of high quality (and highly popular) R packages. His vocabulary page lists some of the commands, functions and packages he considers most essential, grouped by task category. Once you've gotten some experience using R, you might want to consult this page to see if you've missed anything useful to you.
DataScience+
DataScience+ is home to free tutorials (115 as of this writing) relating to R. It should be a go-to location when you are looking to learn more about the uses of R. Tutorials range from the basic (getting started with R) to the somewhat esoteric (random forests, Bayesian regression, ...).
R Packages for Data Access (update 2016-08-13)
This Revolutions blog post contains a list of links to packages that provide tools for accessing online data sources (or simply bundle the data sets themselves).
twotorials (update 2016-08-14)
Anthony Damico has posted a series of two minute video tutorials on many basic to intermediate operations in R. They're well done and (at least to my taste) fairly amusing. Note to non-native English speakers: he keeps them to two minutes by speaking at Warp Factor 5, so good luck.

## Monday, July 25, 2016

### Connecting Bluetooth Headphones

I recently picked up a pair of Bluetooth headphones (Mixcder ShareMe 7) for use with my laptop (which runs Linux Mint). Getting them to connect properly was a bit of an adventure. After I had things (mostly) sorted out, I decided to script the steps necessary to get them working so that I could just double-click a script file and let it do the bulk of the work for me. I thought I'd pass along my script (and some background) here in case it helps anyone else. Note that what follows works on Mint, and probably (maybe? hopefully?) on other Debian-based distributions of Linux. Windows and Mac users may need to look elsewhere.

### Preliminaries

I already had Bluetooth support installed on the laptop. (It did not come with the basic Mint installation.) On top of the drivers, I installed the blueman and bluez-tools packages from the Canonical repositories (using Synaptic). The former provides a nice graphic user interface for managing Bluetooth connections, while the latter provides some handy command line tools.

The first step was to pair the headphones with the laptop. It's fairly easy to do using blueman, and I won't bother with the details here (especially since I've already forgotten them). One thing I will mention is that, during the pairing, I was asked for a PIN code for the headphones. I couldn't find it in the otherwise extensive (and multilingual) manual, though I may have missed it. At any rate, the common default choice for headsets and many other Bluetooth devices worked: 0000.

A couple of other things need to be noted that might be peculiar to my setup. First, I have a command that runs at startup and turns off the Bluetooth service. I do that to conserve battery power, since I don't use Bluetooth all that often on the laptop. Second, my laptop has better than average battery life but isn't exactly a portable supercomputer. I had to stick a pause in the script below due to a timing issue. Whether others would have the same timing issue, and whether they would need the same duration pause, a longer one or a shorter one, is an empirical question. (In other words, caveat emptor.)

### Setup script

I run this script (by double-clicking it, or invoking it in a terminal, depending on my mood) after turning on the headphones:

#!/bin/bash
# Script to connect Mixcder headphones as audio sink.
# Note: turn on the headphones first!
#
# Unblock the Bluetooth adapter (blocked by startup script)
gksudo rfkill unblock bluetooth
# Load the Bluetooth discovery module.
pactl load-module module-bluetooth-discover
# Pause a few seconds -- attempting to connect immediately fails
# (timing issue?)
sleep 5
# Connect to the headphones.
# (Ignore a "did not receive a reply" error -- seems harmless.)
bt-audio -c "Mixcder ShareMe 7"
# Make sure the headphones use the audio sink (A2DP) profile, not the
# telephony profile (or the turned off "profile")
pactl set-card-profile bluez_card.E8_99_FF_22_76_44 a2dp

• The call to rfkill (which requires superuser privileges) turns Bluetooth back on. You will not need that if you leave Bluetooth on by default.
• The next line loads the Bluetooth discovery module, which is apparently necessary for Bluetooth to find the headphones.
• After that, I have the script take a five second nap to let the discovery module do some discovering. (The default time unit for sleep is seconds.) As I mentioned above, five seconds seems to work for my laptop; your mileage may vary (or you may not need this at all).
• The call to bt-audio connects the headset. Substitute the name of your device (collected in the previous section), in quotes, if you're not using a Mixcder ShareMe 7.
• Finally, the last line sets the profile for the headphones to be a high quality audio sink (meaning the microphone is turned off and the headphones act like stereo speakers). The "a2dp" is what specifies that profile. If you want your headset to act like a telephone (microphone on, lower quality sound), change "a2dp" to "hsp". Note that the "card" name is "bluez_card." followed by the MAC address (which you collected above) with the colons converted to underscores (no idea why).
If you have problems with getting the correct profile name (i.e., neither "a2dp" nor "hsp" does what you want), try running 'pactl list' in a terminal. The section of output labeled "Profiles:" lists the available profiles, giving for each one the name (e.g., "a2dp") followed by a colon and a description of it.
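Incidentally, the card-name mangling is easy to reproduce in the shell. Given the headphones' MAC address (the one below is the address used in the script above), a bash parameter expansion swaps the colons for underscores:

```shell
#!/bin/bash
# MAC address of the Bluetooth device (from the script above)
MAC="E8:99:FF:22:76:44"
# PulseAudio names the card "bluez_card." plus the MAC with ':' -> '_'
CARD="bluez_card.${MAC//:/_}"
echo "$CARD"   # prints bluez_card.E8_99_FF_22_76_44
```

Handy if you script this for a different device and don't want to transcribe the underscores by hand.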