PATCH/POST/PUT requests.
Largely for reasons of developer convenience, we decided to allow a fallback to form-encoded parameters as well (for the time being at least), so we put together a helper method that allows us to handle these in a generic fashion. It looks something like this:
class API < Sinatra::Base
  post "/resources" do
    params = parse_params
    Resource.create(name: params[:name])
    201
  end

  private

  def parse_params
    if request.content_type == "application/json"
      # read the body, then rewind so it stays available; note that the
      # decoded hash (not the rewind) is the return value
      body = request.body.read
      request.body.rewind
      indifferent_params(MultiJson.decode(body))
    else
      params
    end
  end
end
By specifying Content-Type: application/json
, JSON-encoded data can be sent to and read by the API:
curl -X POST https://api.example.com/resources \
-d '{"name":"my-resource"}' -H "Content-Type: application/json"
The more traditional method for encoding POSTs is to use the application/x-www-form-urlencoded
MIME type which looks like company=heroku&num_founders=3
and is sent in directly as part of the request body. Rack will decode form-encoded bodies by default and add them to the params
hash, so our API easily falls back to this:
curl -X POST https://api.example.com/resources -d "name=my-resource"
(Note that Curl will send Content-Type: application/x-www-form-urlencoded
by default.)
Good so far, but a side-effect that we hadn’t intended is that our API will also read standard query parameters:
curl -X POST https://api.example.com/resources?name=my-resource
On closer examination of the Rack source code, it’s easy to see that Rack is trying to simplify its users lives by blending all incoming parameters into one giant input hash:
def params
  @params ||= self.GET.merge(self.POST)
rescue EOFError
  self.GET.dup
end
While not a problem per se, this does widen the available options for use of the API to cases beyond what we considered reasonable. We cringed to think about seeing technically correct, but somewhat indiscriminate, usage examples:
curl -X POST https://api.heroku.com/apps?region=eu -d "name=my-app"
By re-implementing the helper above to ignore params, the catch-all set of parameters, and instead use request.POST, which contains only form-encoded input, we can exclude query input:
def parse_params
  if request.content_type == "application/json"
    body = request.body.read
    request.body.rewind
    indifferent_params(MultiJson.decode(body))
  elsif request.form_data?
    indifferent_params(request.POST)
  else
    {}
  end
end
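The dispatch logic is easy to exercise in isolation. Here's a sketch in plain Ruby, with the stdlib's JSON standing in for MultiJson and a FakeRequest struct (invented for the example) standing in for Sinatra's request object:

```ruby
require "json"

# Minimal stand-in for the parts of the request object the helper touches;
# the struct and its field names are invented for illustration.
FakeRequest = Struct.new(:content_type, :raw_body, :post_params) do
  def form_data?
    content_type == "application/x-www-form-urlencoded"
  end
end

# Same branching as the helper above: JSON bodies are decoded, form data
# falls back to the POST params, and anything else (e.g. query-string-only
# input) is ignored entirely.
def parse_params(request)
  if request.content_type == "application/json"
    JSON.parse(request.raw_body, symbolize_names: true)
  elsif request.form_data?
    request.post_params
  else
    {}
  end
end
```

The key property is the final branch: input that arrives neither as JSON nor as form data produces an empty hash rather than leaking through.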
As an addendum, it’s worth mentioning that rack-test
also sends application/x-www-form-urlencoded
by default (and always will unless you explicitly override Content-Type
to a non-nil value), and that’s what’s going on when you do this:
it "creates a resource" do
  post "/resources", name: "my-resource"
end
We found that it was worthwhile writing our tests to check the primary input path foremost, so most look closer to the following:
it "creates a resource" do
  header "Content-Type", "application/json"
  post "/resources", MultiJson.encode({ name: "my-resource" })
end
Another thing to know when testing with rack-test is that by default, Sinatra will swallow your errors and spit them out as a big HTML page in the response body. Trying to debug your tests by inspecting an HTML backtrace from last_response.body is a harrowing experience (take it from someone who’s tried).
The solution is to tell Sinatra to raise errors back to you instead of burying them in HTML. Here’s the proper combination of options to accomplish that:
set :raise_errors, true
set :show_exceptions, false
Here’s a more complete example:
# app.rb
class App < Sinatra::Base
  configure do
    set :raise_errors, true
    set :show_exceptions, false
  end

  get "/" do
    raise "error!"
  end
end
# app_test.rb
describe App do
  include Rack::Test::Methods

  it "shows an error" do
    get "/"
  end
end
config.ru
) and map it to a particular URL path. This approach works well, but makes a mess in the rackup file and offers little in terms of fine-grained control. More recently I’ve found that I can get better flexibility and clarity by running it from a custom Sinatra module.
Add Sprockets and Yahoo’s YUI Compressor to your Gemfile:
gem "sprockets"
gem "yui-compressor"
# I find it well worth it to include CoffeeScript and Sass as well
gem "coffee-script"
gem "sass"
Your assets file structure should look something like this:
+ assets
+ images
- my-jpg.jpg
- my-png.png
+ javascripts
- app.js
- my-scripts.coffee
+ stylesheets
- app.css
- my-styles.sass
app.js
should load all other JavaScript assets in its directory (in the example structure above this will pick up my-scripts.coffee
):
//= require_tree
app.css
as well (includes my-styles.sass
):
//= require_tree
The Sinatra module should look something like this:
class Assets < Sinatra::Base
  configure do
    set :assets, (Sprockets::Environment.new { |env|
      env.append_path(settings.root + "/assets/images")
      env.append_path(settings.root + "/assets/javascripts")
      env.append_path(settings.root + "/assets/stylesheets")

      # compress everything in production
      if ENV["RACK_ENV"] == "production"
        env.js_compressor  = YUI::JavaScriptCompressor.new
        env.css_compressor = YUI::CssCompressor.new
      end
    })
  end

  get "/assets/app.js" do
    content_type("application/javascript")
    settings.assets["app.js"]
  end

  get "/assets/app.css" do
    content_type("text/css")
    settings.assets["app.css"]
  end

  %w{jpg png}.each do |format|
    get "/assets/:image.#{format}" do |image|
      content_type("image/#{format}")
      settings.assets["#{image}.#{format}"]
    end
  end
end
Now use the assets module as middleware in config.ru
, and delegate everything else to your main app:
use Assets
run Sinatra::Application
$stderr. Even a relatively harmless operation like a restart will result in noise written to your system error log:
executing ["/home/core/.bundle/gems/ruby/1.8/bin/unicorn_rails", "-c", "config/normalized_unicorn.rb"] (in /home/core)
forked child re-executing...
I, [2012-10-17T09:00:35.029145 #12322] INFO -- : inherited addr=/tmp/core.sock fd=4
I, [2012-10-17T09:00:35.029885 #12322] INFO -- : Refreshing Gem list
reaped #<Process::Status: pid=2784,exited(0)> worker=1
reaped #<Process::Status: pid=2785,exited(0)> worker=2
reaped #<Process::Status: pid=2783,exited(0)> worker=0
master complete
master process ready
worker=1 ready
worker=2 ready
worker=0 ready
Simply redefining Unicorn’s logger to one pointing to $stdout
will fix the problem:
# by default, Unicorn will log to $stderr; go to $stdout instead
logger Logger.new($stdout)
The platform’s answer for developers requiring SSL on a custom domain is the use of the SSL Endpoint addon, priced at $20 a month (the dark days of $100/mo. ssl:ip are finally over!). After adding SSL Endpoint to an app, a developer uploads their cert and an endpoint is created with a name like mie-6498.herokussl.com
. He or she then CNAMEs their domain to the endpoint and secure requests are routed through with no app changes necessary.
And just a final bit of background: any given request on the Heroku platform enters through the routing mesh. The tl;dr is that it finds an appropriate runtime where the app is deployed and forwards its requests through.
In case the $20/mo. per app for a custom domain seems a steep price to pay, you may be happy to find out that in many cases a single SSL Endpoint can be shared between many apps.
Requests coming through an SSL Endpoint follow the same rules as the rest of the platform–a request may enter through an endpoint but from there is routed through the mesh normally. Therefore, it’s not an SSL Endpoint’s associated app that decides where a request goes, but rather the incoming domain that’s been CNAME’d to the endpoint.
A savvy developer can take advantage of this behavior to allow a single SSL Endpoint to route to any number of Heroku apps. For the connection to stay secure, the cert uploaded to the endpoint needs to be signed for any domains you intend to use with it, but even a free cert from StartCom allows two domains to be included without any special verification. A wildcard certificate (i.e. *.mutelight.org
) will secure an entire stack of apps deployed into the Heroku cloud.
Below is a simple example demonstrating how a single endpoint is shared for both brandur.org and facts.brandur.org:
#
# the app brandur-org below has ssl:endpoint
# the app facts-web does not
#
$ heroku addons -a brandur-org
ssl:endpoint
$ heroku addons -a facts-web
No addons installed
#
# both www.brandur.org (entry point for the app brandur-org) and
# facts.brandur.org (app facts-web) are CNAME'd to mie-6498
#
$ host www.brandur.org
www.brandur.org is an alias for mie-6498.herokussl.com.
$ host facts.brandur.org
facts.brandur.org is an alias for mie-6498.herokussl.com.
#
# both apps get a secure connection because brandur-org's cert includes both
# domains
#
$ heroku certs -a brandur-org
Endpoint Common Name(s) Expires Trusted
---------------------- ----------------------- -------------------- -------
mie-6498.herokussl.com facts.brandur.org, 2013-07-21 03:31 UTC True
www.brandur.org
Inspecting a Postgres configuration file will reveal a setting that specifies the maximum number of connections that its associated service will allow:
max_connections = 20
As with other settings, this can be checked by connecting to any running Postgres and executing the following query:
select name, setting from pg_settings where name = 'max_connections';
Protip: you’ll notice that for all our Postgres services at Heroku, from Dev to Ronin, and all the way to Mecha, the response will be 500
.
Bundle MultiJson in your Gemfile:
gem "multi_json"
Now define a helper for identifying Curl clients, and use it wherever encoding JSON:
# sample Sinatra app
helpers do
  def curl?
    !!(request.user_agent =~ /curl/)
  end
end

get "/articles" do
  articles = Article.all
  [200, MultiJson.encode(articles, pretty: curl?)]
end
Backbone relies on the inclusion of jQuery or Zepto in your project to provide the underlying infrastructure for making AJAX calls. If you’re using jQuery, there’s a function called $.ajaxSetup
that will set options before every AJAX call. Use it to set the Authorization
header (warning: CoffeeScript):
$.ajaxSetup
  headers:
    Authorization: "Basic #{toBase64(":secret-api-password")}"
Under HTTP basic, the user and password are joined with a colon and base64 encoded before being sent along to the server. JavaScript doesn’t provide utilities to handle that out of the box, so the toBase64 function above needs to be implemented to get this example running.
A nice option is CryptoJS. Download the package and include the following files in your project:
core.js
enc-base64.js
Now you’re ready to implement toBase64
and complete this example:
toBase64 = (str) ->
  words = CryptoJS.enc.Latin1.parse(str)
  CryptoJS.enc.Base64.stringify(words)
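Since the rest of the stack here is Ruby, toBase64’s output is easy to sanity-check against the standard library:

```ruby
require "base64"

# strict_encode64 matches what the browser-side encoder should produce;
# the plain encode64 variant appends a trailing newline, which would
# corrupt the Authorization header.
Base64.strict_encode64(":secret-api-password")
# => "OnNlY3JldC1hcGktcGFzc3dvcmQ="
```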
I was pleasantly surprised to find that in OS X you can now disable caps lock out of the box if you don’t intend to rebind it. This is accomplished via System Preferences --> Keyboard --> Modifier Keys --> Caps Lock Key --> No Action
, and provides a measurable improvement over the system default.
Then I started to wonder whether I could put caps lock to good use by solving another problem caused by Apple’s keyboard design, and the answer turned out to be yes.
Tmux has moved beyond a terminal multiplexing tool and has become one of the most important tools in my kit by acting as the de facto window manager for all my important tools and sessions. As such, I hit my Tmux prefix shortcut C-a
a lot, which is tremendously inconvenient because even in 2012 Apple is still jamming a fn
key onto everything they make so that ctrl
is harder to hit.
Switching to caps lock as a Tmux prefix solves this problem forever. Here’s how to do it:
1. Download PCKeyboardHack, install, and restart.
2. From the new System Preferences pane, change the keycode under the Change Caps Lock entry to 109 (that’s F10), and check its box.
3. In your .tmux.conf, change applicable settings to use F10:
# thanks to PCKeyboardHack, F10 is caps lock and caps lock is F10
set-option -g prefix F10
# go to last window by hitting caps lock two times in rapid succession
bind-key F10 last-window
A handy trick that we use here regularly is to inspect the CLI’s workflow by telling Excon to send its output to standard out. Try it for yourself:
EXCON_STANDARD_INSTRUMENTOR=true heroku list
Any calls that are implemented via heroku.rb make their requests using Excon, but a few of the older endpoints still use RestClient. If you run into one of these, you can do something very similar:
RESTCLIENT_LOG=stdout heroku drains -a mutelight
When should an API respond with a 406? A 415? Here are some plain English explanations:
* 406 Not Acceptable – in the context of format, when the server can’t (or won’t) respond in the format that the client has requested. This requested format could come in via an Accept header or an extension in the path.
* 415 Unsupported Media Type – when the client has sent content in a request body that the server doesn’t support. This would occur during a POST or PUT and may be described by the Content-Type header.

A user on Stack Overflow puts it as succinctly as possible: “406 when you can’t send what they want, and 415 when they send what you don’t want.”
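As a sketch of how an API might apply the two (the type lists and the negotiate helper are invented for illustration):

```ruby
# Formats we can respond in, and request bodies we can read (hypothetical).
RESPONSE_TYPES = ["application/json"].freeze
REQUEST_TYPES  = ["application/json", "application/x-www-form-urlencoded"].freeze

# 406 when we can't send what they want; 415 when they send what we don't want.
def negotiate(accept, content_type)
  return 406 unless RESPONSE_TYPES.include?(accept)
  return 415 unless content_type.nil? || REQUEST_TYPES.include?(content_type)
  200
end
```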
Despite this attractive selection, I use none of the above. Today, I wanted to share a very simple todo pattern that I’ve been using for months now with great results. Here it is in its entirety:
@todo
=====
* Pick up the milk
* h/Submit TPS report
Finished
--------
* h/Order stationery
Defunct
-------
* Submit talk
# vi: ts=2 sw=2 foldmethod=indent foldlevel=20
It’s that simple:
The list stays open in Vim in a Tmux pane at all times, and gets synced back to Dropbox. Finished items are transferred between lists using fast Vim bindings. If I think of something away from my computer, I add it to my phone, then transfer the task the next time I’m back.
The Vim hints at the end provide some nice folding behavior, which is useful when your finished list has become very long. Open and close individual lists using zo
and zc
respectively (the foldlevel
hint at the end ensures that all lists are expanded when the file is first opened).
~/.heroku to an older and more normalized storage standard, .netrc. This isn’t an isolated event either: you may have noticed that GitHub recently changed the recommended clone method on new repositories to HTTPS, which has the side effect of bypassing your standard access with ~/.ssh/id_rsa. How do you get back to not being prompted for your credentials every time you push to the repository? Netrc.
.netrc
is an old standard that dates all the way back to the days of FTP, that romantic wild west era of the Internet where the concept of “passive mode” kind of made sense. Its job is to store a user’s credentials for accessing remote machines in a simple and consistent format:
machine brandur.org
login brandur@mutelight.org
password my-very-secure-personal-password
machine mutelight.org
login brandur@mutelight.org
password my-even-secure-password-with-a-number-on-the-end-7
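The format is simple enough that a toy parser fits in a few lines. This is only a sketch: it ignores default entries, macdef blocks, and the other quirks of the real format, and parse_netrc is an invented name:

```ruby
# Parse a "machine X login Y password Z" token stream into a hash
# keyed by machine name.
def parse_netrc(text)
  entries = {}
  machine = nil
  text.split.each_slice(2) do |key, value|
    case key
    when "machine"           then machine = value; entries[machine] = {}
    when "login", "password" then entries[machine][key] = value if machine
    end
  end
  entries
end
```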
Although originally intended for FTP, its use has since expanded to other network clients including Git, Curl, and of course Heroku.
A common pattern that I’ve run into while building APIs over the last few months is to protect them with HTTP basic authentication. This isn’t necessarily the best solution in the long term (passing tokens provisioned via OAuth 2 may be better), but it’s a mechanism that can be set up quickly and easily.
Take this Sinatra app as an example:
# run with:
# gem install sinatra
# ruby -rubygems api.rb
require "sinatra"
set :port, 5000
helpers do
  def auth
    @auth ||= Rack::Auth::Basic::Request.new(request.env)
  end

  def auth_credentials
    auth.provided? && auth.basic? ? auth.credentials : nil
  end

  def authorized?
    auth_credentials == ["", "my-secret-api-key"]
  end

  def authorized!
    halt 401, "Forbidden" unless authorized?
  end
end

put "/private" do
  authorized!
  200
end
After running it, we can test our new API with Curl:
curl -i -u ":my-secret-api-key" -X PUT http://localhost:5000/private
HTTP/1.1 200 OK
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
Content-Type: text/html;charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: thin 1.3.1 codename Triple Espresso
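Under the hood there’s no magic: Curl’s -u flag sets an Authorization header containing “Basic ” plus the base64 of “user:password”. A simplified sketch of the parsing that Rack::Auth::Basic::Request does for us (credentials_from is an invented name):

```ruby
require "base64"

# Split "Basic <base64>" and decode back into [user, password].
def credentials_from(header)
  scheme, encoded = header.to_s.split(" ", 2)
  return nil unless scheme == "Basic" && encoded
  Base64.decode64(encoded).split(":", 2)
end
```

With an empty user, as in the example above, the decoded pair comes back as ["", "my-secret-api-key"], which is exactly what authorized? compares against.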
Now here’s the interesting part. Add the following lines to your .netrc
:
machine localhost
password my-secret-api-key
Try the same Curl command again but using the -n
(for --netrc
) flag:
curl -i -n -X PUT http://localhost:5000/private
Voilà! The speed of being able to run ad-hoc queries against an API you’re building rather than drudging up your API key every time turns out to be a huge win practically, and it’s a pattern that I now use regularly during development.
A limitation that’s hinted at above is that you can only have a single entry for localhost
. Generally, I find that this isn’t a huge problem because most of the APIs I want to hit are deployed in a staging or production environment with a named URL.
Now onto a nice real-world example. Are you a Heroku user? Have you updated your gem since February 2012? If the answer to both questions is yes, try this from a console:
curl -n https://api.heroku.com/apps
A glaring problem with .netrc
is that it keeps a large number of your extremely confidential credentials out in the open in plain text. Presumably, the file is chmod
‘ed to 600
and you’re using full-disk encryption, but that’s still probably not enough (say someone happens to find your computer unlocked).
The netrc gem used by the Heroku client will try to find a GnuPG encrypted file at ~/.netrc.gpg
before falling back to the plain text version. Although this convention is far from a standard, it’s still recommended security practice.
We’re all used to being able to run script/rails console and immediately start running commands from inside our projects. What may not be very well known is that this console isn’t a piece of Rails black magic, and it makes a nice pattern that extends well to any other type of non-Rails Ruby project.
Here’s the basic pattern:
#!/usr/bin/env ruby
require "irb"
require "irb/completion" # easy tab completion
# require your libraries + basic initialization
IRB.start
With the right initialization, this will immediately drop you into a console with all your project’s models, classes, and utilities available, and even with tab completion! It also translates easily over to cloud platforms, being only one heroku run bin/console
away, so to speak.
I picked up the idea somewhere at Heroku where public opinion generally sways against heavy Rails-esque frameworks and towards more custom solutions built from the right set of lightweight components.
Here’s a real world example for the bin/console of Hekla, which runs this technical journal:
#!/usr/bin/env ruby
require "irb"
require "irb/completion"
require "bundler/setup"
Bundler.require
$: << "./lib"
require "hekla"
DB = Sequel.connect(Hekla::Config.database_url)
require_relative "../models/article"
# Sinatra actually has a hook on `at_exit` that activates whenever it's
# included. This setting will suppress it.
set :run, false
IRB.start
The traditional debugger ruby-debug has been known to be 1.9-incompatible for some time now, but more recently, its updated version ruby-debug19 is no longer 1.9-compatible either, having been broken by 1.9.3 without a new release. Luckily, the awesome new debugger gem stepped in to fill the gap.
Include both debuggers in your Gemfile
with platform conditionals:
group :development, :test do
  gem "debugger", "~> 1.1.3", :platforms => [:ruby_19]
  gem "ruby-debug", "~> 0.10.4", :platforms => [:ruby_18]
end
I debug pretty often, but don’t like to type a lot, so I usually include a shortcut in my test_helper.rb
to get a debugger invoked quickly regardless of the Ruby version that you’re running:
def d
  begin
    require "debugger"
  rescue LoadError
    require "ruby-debug"
  end
  debugger
end
Now drop it into a file like so:
def requires_frequent_debugging
  risky_call rescue nil
  Singleton.manipulate_global_state
  d # the debugger will start on the next line
  Model.do_business_logic
  super
end
It might seem like the debugger would start in the d
method rather than where you want to debug, forcing you to finish the stack frame before you could start debugging. Fortunately, that’s not the case. The d
method has returned by the time the debugger is invoked, leaving you exactly where you want to be.
In a classic case of open-source overkill, I’ve extracted the pattern described above into a trivial gem called d2. Throw it in your Gemfile, make sure that your project is either using Bundler.setup
or including require 'd2' somewhere, then call d2 wherever you want to trigger the debugger.
Aside – A slightly interesting Ruby tidbit related to the code above is that we use rescue LoadError
because a generic rescue
only catches StandardError
exceptions. LoadError
is derived from a different hierarchy headed by ScriptError
.
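That hierarchy is easy to verify with a couple of lines (the helper name is invented for the example):

```ruby
# LoadError sits under ScriptError, not StandardError, so a bare
# rescue lets it fly right past to the outer handler.
def bare_rescue_catches_load_error?
  begin
    raise LoadError, "cannot load such file"
  rescue # implicitly rescues StandardError only
    return true
  end
rescue LoadError
  false
end
```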
We use Unicorn because of its nice restarting trick that enables deploys with minimal complexity, and with no dropped connections. A side effect of the mechanism Unicorn uses to provide this feature is that the Unicorn master process runs on a single port which accepts connections, then delegates to one of the worker processes running on the box. That single port bound to by the master process makes a very nice target for the ELB, removing the need for a reverse proxy local to the box. One less component in the HTTP stack is one less piece that can fail, and it reduces the knowledge required to properly manage our stack.
The fact that Unicorn was designed with the expectation of being run behind Nginx to buffer incoming requests and handle slower connections (it’s right there on the Unicorn philosophy page) is another discussion, but we generally found that Unicorn runs pretty well on its own for our purposes. That is except when it’s behind an ELB in HTTPS mode, but those findings deserve an article of their own.
Assuming that you want to deploy Unicorn on port 80, the very first challenge you’d run into is that on a typical Linux box, root privileges are required to bind to any port below 1024. A great way to work around this is by using Authbind. Start by installing it via your favorite package manager:
aptitude install authbind
Authbind’s permissions are managed with a special set of files in /etc/authbind
. Create a file telling Authbind that binding to port 80 should be allowed:
touch /etc/authbind/byport/80
Authbind determines that a user is allowed to bind an application to port 80 if they have access to execute this file. Change ownership of the file to the user your web server runs under (assumed to be http
here) and make sure it has executable (x
) permissions. Alternatively, we could accomplish the same thing using groups.
# as root
chown http /etc/authbind/byport/80
chmod 500 /etc/authbind/byport/80
Test the setup using Python’s built-in HTTP server:
# as http user
authbind python -m SimpleHTTPServer 80 # Python 2.x
authbind python -m http.server 80 # Python 3.x
That’s it! Notice that the web server command here should be prefixed by the authbind
command for this to be allowed. Another Authbind invocation worth mentioning is authbind --deep
which enables port binding permissions for the program being executed, as well as any other child programs spawned from it.
Some key features of the new dev plan are that databases under it run Postgres 9.1 (up from the 8.3 that the shared databases ran), support hstore, and can be managed remotely using heroku pg:psql
or any other Postgres client.
However, since the dev plan adds a brand new database, the default is to end up with an empty store with none of your previous application data. If you’re like me, and not too familiar with Heroku Postgres, it might not be immediately obvious how to seamlessly get your data migrated over. Lucky for you though, you’re on Heroku! Using pgbackups, there’s a very simple way to move your data between databases and produce a backup as a convenient byproduct.
Add the pgbackups
addon and capture a backup of your current shared database:
heroku addons:add pgbackups
heroku pgbackups:capture
The Heroku command will tell you that a backup was produced with a name like b001
. Now add your new Postgres dev database:
heroku addons:add heroku-postgresql:dev
The name of your new database will come back as a token like HEROKU_POSTGRESQL_CYAN
. It’s attached to your app, but not yet acting as its primary database.
Now all that’s left to do is restore the backup you made, and make it your primary:
heroku pgbackups:restore HEROKU_POSTGRESQL_CYAN b001
heroku pg:promote HEROKU_POSTGRESQL_CYAN
Open a psql session to the new Postgres dev instance and check that all your data is properly in place:
heroku pg:psql
Optionally, you can destroy your old shared database:
heroku addons:remove shared-database
Many of us have fond memories of reading of Arthur Dent and Ford Prefect traveling the galaxy accompanied by their towels and depressed robot. Readers will remember the friendly book inscribed with the words Don’t Panic, the story’s namesake and described as “the standard repository for all knowledge and wisdom”. Back in the ’90s, the idea of a device with compiled information on nearly everything small enough to carry in your pocket was laughable. The closest thing at the time were encyclopedias spanning entire bookshelves (or CD-ROMs).
Today we have our own form of the Guide, and it’s better than even Douglas Adams could have imagined: Wikipedia.
Wikipedia is the ultimate travel resource. On a trip across the country it lets you look up everything–from towns and landmarks you’re visiting to fact checking your tour guide.
This would be a world with almost perfect flow of information if not for one thing. In every country on Earth, phone carriers actively encourage technological retrogression by keeping data roaming rates prohibitively expensive–and you’re going to need data for a Wikipedia lookup.
As luck would have it though, there are a few solutions that will compensate for a lack of 3G on international trips. I personally downloaded a 5 GB dump of Wikipedia with AllofWiki and use it to browse offline in any country. While traveling Europe last month, I used it to look up the Berlin Wall, Kunsthaus Tacheles, the Lady Moura (anchored in Monaco), the Catacombs of Paris, the TGV, and the English language (good read!) amongst hundreds of other subjects. A few days into the trip, it became an absolutely indispensable resource.
3G Kindles are also a good option, offering free international wireless Wikipedia access. Also try Offline Wiki in HTML5 for a nice notebook solution.
Although we’re not yet having Wiki updates pushed to us via the Sub-Etha, this feels like the future.
Frustrated beginners will claim that Vim involves nothing but rote memorization–and they’re right, but only on the most basic level. Vim’s far more important feature is enabling its users to manipulate code on a large scale by building actions from the editor’s primitive building blocks. Thinking of these actions as phrases built from verbs, nouns, and modifiers is a very effective way of illustrating this concept.
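The grammar becomes concrete with a few example phrases, each one a verb applied to a noun, optionally with a modifier (standard Vim normal-mode notation):

```
dw      delete a word           (verb d, noun w)
d2w     delete two words        (verb d, count 2, noun w)
ci"     change inside quotes    (verb c, modifier i, noun ")
yap     yank around a paragraph (verb y, modifier a, noun p)
```

Once the pieces are internalized, new combinations come for free rather than being one more chord to memorize.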
startx
script. After getting used to Awesome’s key bindings, and throwing Luakit, Urxvt and Tmux into the mix, I got about as close to an optimized Linux build as I was likely to get.
Everything was perfect … except for one aspect: the clipboard. Its behavior was utterly perplexing: I could select text and middle-click (or Shift-Insert
) it most places I wanted, but I could only copy out of Chromium, and pasting there seemed to respect only text that had been copied from Chromium itself. Vim was even worse: even with set clipboard=unnamed it didn’t seem to play nice with anything else.
This was pretty frustrating–the clipboard’s importance in the everyday workflow really can’t be overstated. So what was the problem? To understand, we have to know a little more about the X clipboard.
In X10, cut-buffers were introduced. The concept behind this now obsolete mechanism was that when text was selected in a window, the window owner would copy it to a property of the root window called CUT_BUFFER1
, a universally owned bucket available to every application on the system. General consensus on cut-buffers was that they left a lot to be desired, so a new system was devised.
Thus selections came about. Rather than applications copying data to a global bucket, they request ownership of a selection. When paste is called from another application, it requests data from the client that currently owns the selection. Aside from being much more versatile and less volatile than cut-buffers, selections can also be faster because no data has to be sent on a copy (only on paste). This is especially advantageous when there’s a slow connection to the X server, but this strength is also a weakness because data made available by an application disappears when it closes.
Three selections are defined in the ICCCM: CLIPBOARD
, PRIMARY
, and SECONDARY
, each of which behaves like a clipboard in its own right:
* CLIPBOARD: traditionally used when text is copied and pasted from the edit menu, or via the Ctrl+C and Ctrl+V shortcuts in applications that support them.
* PRIMARY: traditionally used when a mouse selection is made, and pasted with middle-click or Shift-Insert.
* SECONDARY: an ill-defined secondary selection. Most applications don’t use it.

The heart of the problem for me is that I expected the X clipboard to behave like the clipboard on Windows or Mac OS X, but in fact X’s architecture is fundamentally different, with two separate, yet equally important, clipboards in use.
Naturally, I had to know how Vim interacts with the X clipboard and was pleased to discover that it has some really great documentation on the subject (see for yourself with :help x11-selection
). When running a GUI or X11-aware version of Vim, it has two registers that interact with X:
* * (as in "*yy): the PRIMARY selection. :set clipboard=unnamed aliases it to the unnamed register.
* +: the CLIPBOARD selection. :set clipboard=unnamedplus aliases it to the unnamed register.

Vim does not interact with the SECONDARY selection.
I’m a Linux person at heart, but for me the two equal and separate selections remain an unfortunate usability problem. Luckily for anyone with the same disposition, Autocutsel can help make X’s behavior more logical and intuitive. It’s a great little program that synchronizes the cut-buffer with CLIPBOARD
, or both the cut-buffer and CLIPBOARD
with PRIMARY
as well.
Install Autocutsel (pacman -S autocutsel
on Arch) and put the following two lines into your .xinitrc
(or just run them from a terminal to immediately observe the effects):
autocutsel -fork &
autocutsel -selection PRIMARY -fork &
Now, no matter where you copy and paste from, be it Ctrl+C
in Chrome, p
in Vim, or through text selection in X, your clipboard is consistent across the entire system.