Oh how annoying, just trying to deploy a quick fix for a client.
DEBUG[f3c03d7b] Found old RVM 1.29.4 - updating.
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] Downloading https://get.rvm.io
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] Downloading https://raw.githubusercontent.com/rvm/rvm/master/binscripts/rvm-installer.asc
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] Verifying /home/rails/.rvm/archives/rvm-installer.asc
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] gpg:
DEBUG[f3c03d7b] Signature made Sun 30 Dec 2018 10:44:46 UTC using RSA key ID 39499BDB
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] gpg:
DEBUG[f3c03d7b] Can't check signature: public key not found
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] Warning, RVM 1.26.0 introduces signed releases and automated check of signatures when GPG software found. Assuming you trust Michal Papis import the mpapis public key (downloading the signatures).
DEBUG[f3c03d7b]
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] GPG signature verification failed for '/home/rails/.rvm/archives/rvm-installer' - 'https://raw.githubusercontent.com/rvm/rvm/master/binscripts/rvm-installer.asc'! Try to install GPG v2 and then fetch the public key:
DEBUG[f3c03d7b]
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
DEBUG[f3c03d7b]
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] or if it fails:
DEBUG[f3c03d7b]
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] command curl -sSL https://rvm.io/mpapis.asc | gpg --import -
DEBUG[f3c03d7b]
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] the key can be compared with:
DEBUG[f3c03d7b]
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] https://rvm.io/mpapis.asc
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] https://keybase.io/mpapis
DEBUG[f3c03d7b]
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys from remote server. Please downgrade or upgrade to newer version (if available) or use the second method described above.
DEBUG[f3c03d7b]
DEBUG[f3c03d7b]
DEBUG[f3c03d7b] /home/rails/.rvm/scripts/functions/cli: line 238: return: _ret: numeric argument required
DEBUG[f3c03d7b]
Seems there is a bug in GPG when fetching remote keys. The solution for this Ubuntu server was to install gnupg2 and import the key again.
rails@fashion1-002:~$ sudo apt-get install gnupg2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
gnupg-agent libassuan0 libksba8 libpth20 pinentry-gtk2
Suggested packages:
gnupg-doc xloadimage pinentry-doc
The following NEW packages will be installed
gnupg-agent gnupg2 libassuan0 libksba8 libpth20 pinentry-gtk2
0 to upgrade, 6 to newly install, 0 to remove and 177 not to upgrade.
Need to get 1,622 kB of archives.
After this operation, 4,115 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main libassuan0 i386 2.0.2-1ubuntu1 [34.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main libksba8 i386 1.2.0-2ubuntu0.2 [107 kB]
Get:3 http://archive.ubuntu.com/ubuntu/ precise/main libpth20 i386 2.0.7-16ubuntu3 [50.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu/ precise/main pinentry-gtk2 i386 0.8.1-1ubuntu1 [51.4 kB]
Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main gnupg-agent i386 2.0.17-2ubuntu2.12.04.6 [298 kB]
Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main gnupg2 i386 2.0.17-2ubuntu2.12.04.6 [1,082 kB]
Fetched 1,622 kB in 0s (9,323 kB/s)
Selecting previously unselected package libassuan0.
(Reading database ... 101031 files and directories currently installed.)
Unpacking libassuan0 (from .../libassuan0_2.0.2-1ubuntu1_i386.deb) ...
Selecting previously unselected package libksba8.
Unpacking libksba8 (from .../libksba8_1.2.0-2ubuntu0.2_i386.deb) ...
Selecting previously unselected package libpth20.
Unpacking libpth20 (from .../libpth20_2.0.7-16ubuntu3_i386.deb) ...
Selecting previously unselected package pinentry-gtk2.
Unpacking pinentry-gtk2 (from .../pinentry-gtk2_0.8.1-1ubuntu1_i386.deb) ...
Selecting previously unselected package gnupg-agent.
Unpacking gnupg-agent (from .../gnupg-agent_2.0.17-2ubuntu2.12.04.6_i386.deb) ...
Selecting previously unselected package gnupg2.
Unpacking gnupg2 (from .../gnupg2_2.0.17-2ubuntu2.12.04.6_i386.deb) ...
Processing triggers for man-db ...
Setting up libassuan0 (2.0.2-1ubuntu1) ...
Setting up libksba8 (1.2.0-2ubuntu0.2) ...
Setting up libpth20 (2.0.7-16ubuntu3) ...
Setting up pinentry-gtk2 (0.8.1-1ubuntu1) ...
Setting up gnupg-agent (2.0.17-2ubuntu2.12.04.6) ...
Setting up gnupg2 (2.0.17-2ubuntu2.12.04.6) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
rails@fashion1-002:~$ gpg2 --recv-keys 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
gpg: requesting key 39499BDB from hkp server keys.gnupg.net
gpg: key 39499BDB: public key "Piotr Kuczynski <piotr.kuczynski@gmail.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
Finally getting to upgrading to Rails 5.1, and there are lots of cases where we use before_* callbacks to block behaviour, most often in before_destroy to prevent a record being deleted when certain conditions are (or aren't) met. Before Rails 5 you could just return false in a callback to halt the callback chain. Now you must use throw(:abort).
The behaviour change was turned off by default in Rails 5.0 for upgraded apps, so you can opt into it sooner by changing that setting in an initializer.
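For reference, the setting is ActiveSupport.halt_callback_chains_on_return_false (it exists in Rails 5.0 and was removed again in 5.1, where the new behaviour is the only behaviour). A sketch of opting in early, assuming the standard upgrade initializer file name:

```ruby
# config/initializers/new_framework_defaults.rb
#
# false = Rails 5.1 behaviour: returning false from a callback no
# longer halts the chain; only throw(:abort) does.
ActiveSupport.halt_callback_chains_on_return_false = false
```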
class Thing < ApplicationRecord
  before_create do
    throw(:abort)
  end

  before_destroy do
    throw(:abort)
  end
end
I initially thought this would mean having to cope with exceptions raised from throwing that abort, rather than, say, checking whether object.destroy returned false. However, it seems it's just a clarification of behaviour to avoid accidental blocking: destroy still returns false.
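That's because throw/catch is plain Ruby control flow, not an exception, so no rescue is involved. A minimal sketch (the run_callbacks helper here is mine, not the ActiveSupport implementation) of how a callback runner can use it:

```ruby
# catch returns the value passed to throw (nil here, since we throw
# :abort without a value) when the block throws, and the block's own
# result when it completes normally.
def run_callbacks(&block)
  completed = catch(:abort) do
    block.call
    true # only reached when no callback threw :abort
  end
  completed == true
end

run_callbacks { throw(:abort) } # => false, chain halted, nothing raised
run_callbacks { :ok }           # => true
```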
2.3.0 :006 > Thing.create.persisted?
(0.1ms) BEGIN
(0.1ms) ROLLBACK
=> false
2.3.0 :007 > reload!
Reloading...
=> true
2.3.0 :008 > Thing.create.persisted?
(0.1ms) BEGIN
SQL (0.3ms) INSERT INTO `things` (`created_at`, `updated_at`) VALUES ('2019-01-17 08:38:24', '2019-01-17 08:38:24')
(0.5ms) COMMIT
=> true
2.3.0 :009 > Thing.last.destroy
Thing Load (0.3ms) SELECT `things`.* FROM `things` ORDER BY `things`.`id` DESC LIMIT 1
(0.1ms) BEGIN
(0.1ms) ROLLBACK
=> false
2.3.0 :010 > reload!
Reloading...
=> true
2.3.0 :011 > Thing.create.persisted?
(0.1ms) BEGIN
SQL (0.2ms) INSERT INTO `things` (`created_at`, `updated_at`) VALUES ('2019-01-17 08:39:29', '2019-01-17 08:39:29')
(21.0ms) COMMIT
=> true
2.3.0 :010 > reload!
Reloading...
2.3.0 :014 > Thing.last.destroy
Thing Load (0.1ms) SELECT `things`.* FROM `things` ORDER BY `things`.`id` DESC LIMIT 1
(0.1ms) BEGIN
SQL (0.2ms) DELETE FROM `things` WHERE `things`.`id` = 2
(0.6ms) COMMIT
=> #<Thing id: 2, created_at: "2019-01-17 08:39:29", updated_at: "2019-01-17 08:39:29">
When in doubt always write just any test. If it passes then you're good to go into production with your fully-tested-to-hell code.
require 'spec_helper'

RSpec.describe 'so much specs' do
  it 'represents the level of care I put into my code' do
    expect { some bullshit }.to raise_error
  end
end
I am not a fan of reading a Rails log and seeing a silly number of queries to the database, particularly if they are for the same thing time and time again: sometimes exactly the same query, but also sometimes a similar query repeated for each object in a list. Yes, you can use includes to preload associations, e.g. Post.includes(:something), but what about something more complex, like a ratings score?
The following is your pretty average polymorphic ratings table. Each Post has multiple ratings, and you just want to show the score.
class Post < ApplicationRecord
  has_many :ratings, as: :rateable

  def score
    (ratings.pluck('AVG(score)')[0] || 0).to_i
  end
end

class Rating < ApplicationRecord
  belongs_to :rateable, polymorphic: true
end
So what happens when you display this as a list? Something like this…
(0.3ms) SELECT AVG(score) FROM `ratings` WHERE `ratings`.`rateable_id` = 1 AND `ratings`.`rateable_type` = 'Post'
Rendered posts/_post.json.jbuilder (1.6ms)
(0.3ms) SELECT AVG(score) FROM `ratings` WHERE `ratings`.`rateable_id` = 2 AND `ratings`.`rateable_type` = 'Post'
Rendered posts/_post.json.jbuilder (1.6ms)
(0.3ms) SELECT AVG(score) FROM `ratings` WHERE `ratings`.`rateable_id` = 3 AND `ratings`.`rateable_type` = 'Post'
Rendered posts/_post.json.jbuilder (1.6ms)
(0.3ms) SELECT AVG(score) FROM `ratings` WHERE `ratings`.`rateable_id` = 4 AND `ratings`.`rateable_type` = 'Post'
Rendered posts/_post.json.jbuilder (1.6ms)
(0.3ms) SELECT AVG(score) FROM `ratings` WHERE `ratings`.`rateable_id` = 5 AND `ratings`.`rateable_type` = 'Post'
Rendered posts/_post.json.jbuilder (1.6ms)
Not particularly efficient. With every Post you increase the number of queries to the DB. Even if each one is a millisecond, 1,000 Post objects later it's a second, and seconds feel clunky in a nice swanky interface. We could include the ratings objects with Post.includes(:ratings), but that's not very efficient either: we just need the score, not a pile of new Rating objects.
We could, however, add the score to the SELECT query. With ActiveRecord magic, anything in the SELECT becomes an attribute of the object you're fetching, in this case Post. If you're wondering what COALESCE means here: when there are no ratings, the AVG of NULL is NULL, and COALESCE returns its first non-NULL argument, so the score falls back to 0.
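In Ruby terms, COALESCE is just "first non-nil argument". A tiny illustrative helper (not part of the app, purely to show the semantics):

```ruby
# SQL's COALESCE(a, b, ...) returns its first non-NULL argument.
def coalesce(*args)
  args.find { |value| !value.nil? }
end

coalesce(nil, 0) # => 0, what the query falls back to with no ratings
coalesce(4.5, 0) # => 4.5, the AVG when ratings exist
```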
@posts = Post
  .left_joins(:ratings)
  .group(:id)
  .distinct
  .select('posts.*, COALESCE(AVG(ratings.score), 0) AS score')
Then we can alter the Post#score method to check its attributes first.
def score
  (attributes['score'] || ratings.pluck('AVG(score)')[0] || 0).to_i
end
So we can kind of eager-load results instead of fetching them in multiple hits to the database. Cool huh? It shaved 40ms off the DB time, so while in this application it's not going to kill us, why not be efficient from the outset and not fill the log with unnecessary queries?
Completed 200 OK in 382ms (Views: 337.1ms | ActiveRecord: 42.3ms)
vs..
Completed 200 OK in 282ms (Views: 262.6ms | ActiveRecord: 4.4ms)
In the absence of a graphics editor on this machine, I just needed a quick way of padding an image that already had a white background. This adds additional white padding around the existing image, which was about 700×1024.
convert ~/Desktop/concordia.gif -gravity center -background white -extent 1600x1124 ~/Desktop/concordia-resize.jpg
Bit of Ruby fun. I really like #tap for cleaning up blocks where you'd otherwise assign unnecessary variables. While I admit the following examples don't all do the same thing, I noticed something this morning. The first two should have concatenated the arrays and the last one appended them. It's odd that the first two examples returned an empty Array and not [1, 1, 2, 3] or [[1], [1, 2, 3]].
2.2.6 :018 > [].tap { |a| a += [1]; a += [1,2,3] }
=> []
2.2.6 :018 > [].tap { |a| a += [[1]]; a += [[1,2,3]] }
=> []
2.2.6 :019 > [].tap { |a| a << [1]; a << [1,2,3] }
=> [[1], [1, 2, 3]]
So, the object returned from #tap is always the one you passed in; for a start, it doesn't return the last thing evaluated in the block. Then you realise a += [1] returns a new object and rebinds the block variable a, so the result can't be the original object: you're concatenating into a new array that #tap never returns.
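If you do want concatenation inside #tap, mutating methods like concat work, because they modify the receiver in place instead of rebinding the block variable:

```ruby
# concat mutates the array that tap yields (and returns), so the
# changes survive outside the block.
result = [].tap { |a| a.concat([1]); a.concat([1, 2, 3]) }
result # => [1, 1, 2, 3]
```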
We’ve been reworking the Boardgame Cafes Map and I think it’s come out quite well. The original map was built with multiple calls to the cafes API endpoint: whenever you selected a new country it reloaded the page and data set, filtering the results down by country. The problem was that you couldn’t move around the map and discover cafes in the next country over, or easily get back to the original full map without a costly page refresh; it felt clumsy and not very slick. With React, a single call to the cafes API gives us the entire data set, from which we build multiple components: the country select list, the cafe list, the cafe markers and the lightbox content.
I am sure there are many more features we can add to improve it but for now it’s relaunched and working great on desktop and mobile.