I just updated this blog from Rails 4.2 to 5.0.0.rc1. Since this is a pretty small hobby blog app, the update was quite simple and smooth.
1) Update your Gemfile as below, then run `bundle update rails`.

```ruby
# core part of rails 5
gem 'rails', '~> 5.0.0.rc1'
gem 'turbolinks', '~> 5.x'

# use the same web server
gem 'puma', '~> 3.0'

# use latest master branch to make bundle resolve dependencies
gem 'kaminari', github: 'amatsuda/kaminari'
gem 'simple_form', github: 'plataformatec/simple_form'
gem 'rspec-rails', github: 'rspec/rspec-rails', tag: 'v3.5.0.beta3'

# Fix for https://github.com/jnicklas/capybara/issues/1592
gem 'capybara', github: 'jnicklas/capybara'
```
2) Run `rails app:update` to update the configuration files. For each conflict, press `d` to check the diff and `y` to overwrite if it's OK.
3) Add `app/models/application_record.rb` like this:

```ruby
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
end
```
4) Update for Turbolinks: listen for the `turbolinks:load` event, which is equal to binding on `DOMContentLoaded`. In jQuery:

```coffeescript
$(document).on "turbolinks:load", ->
  # page-specific initialization here
```
Also check the official Rails guide on upgrading your Rails app.
```ruby
# in config/initializers/delayed_job_config.rb
Delayed::Worker.delay_jobs = true

# in your spec
Delayed::Worker.new.work_off
```
Assume you follow the DelayedJob README example and configure it like this: `Delayed::Worker.delay_jobs = !Rails.env.test?`. What it does is: in the test env it doesn't delay the job, meaning DelayedJob is transparent and the job you enqueue is executed in "real time". In most cases you don't even need to worry about it and your tests should be just fine, but recently it caught me…
To give some background: I'm working on an API-centric Rails project. To authenticate with the API we pass an access token on every request, and that's done in the middleware layer. The access token is stored in a cookie, and in middleware we can't access the browser cookie directly, so another tool called RequestStore is used. Within the same request, whatever you store in RequestStore you can access later regardless of context; an unrealistic example would be storing a cookie value in RequestStore and then using it in a model later. Don't do that :).
The code below is a simplified version to illustrate the flow.
```ruby
class ApplicationController < ActionController::Base
  before_action :set_api_access_token

  def set_api_access_token
    RequestStore.store[:access_token] = cookies.signed[:access_token]
  end
end

class Authentication < Faraday::Middleware
  def call(env)
    env[:request_headers]['Authorization'] = RequestStore[:access_token] if RequestStore[:access_token]
    @app.call(env)
  end
end
```
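To make the RequestStore semantics concrete, here is a minimal plain-Ruby sketch of the idea: a store that lives for the duration of one request and is wiped before the next. This is not the gem's actual implementation (the real one is thread-local and cleared by a Rack middleware); the class name is hypothetical.

```ruby
# Sketch of the per-request store idea behind RequestStore.
class FakeRequestStore
  def self.store
    @store ||= {}
  end

  def self.[](key)
    store[key]
  end

  def self.[]=(key, value)
    store[key] = value
  end

  # In the real gem, a Rack middleware clears the store after each request.
  def self.clear!
    @store = {}
  end
end

# "During a request": a controller-like step saves the token...
FakeRequestStore[:access_token] = "secret-token"
# ...and a middleware-like step reads it back later in the same request.
FakeRequestStore[:access_token] # => "secret-token"

# Between requests the store is cleared, so nothing leaks across requests.
FakeRequestStore.clear!
FakeRequestStore[:access_token] # => nil
```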
Every API request that happens inside the Rails `ApplicationController` stack should have the access token set, but what happens in a different context like a rake task or a DelayedJob worker, where you also need to send requests to the API? The `before_action` is not gonna be executed there, so `RequestStore[:access_token]` would be nil. This is an easy-to-spot issue if you try it once, but if you follow the TDD workflow and write the test first, it'll mislead you.
With `Delayed::Worker.delay_jobs` set to `false` in the test env, the job is executed immediately in the same request, so `RequestStore[:access_token]` still contains the value and is passed to the Authorization header in the middleware. The spec passed, but in a real-world env it failed. A typical false positive.
```ruby
# in config/initializers/delayed_job_config.rb
Delayed::Worker.delay_jobs = true

# in your spec
it do
  # here is the code to enqueue a job to the DelayedJob queue
  visit post_path(post)

  # run it manually
  Delayed::Worker.new.work_off

  # expectation
  expect(api_endpoint).to have_been_requested
end
```
`Delayed::Worker.new.work_off` returns an Array like `[1, 0]`, indicating the counts of succeeded and failed jobs. I've also seen some people test against that, like `expect(Delayed::Worker.new.work_off).to eq([1, 0])`; personally I don't think it's necessary. If some other job happens to be enqueued and it returns `[2, 0]`? That's just noise.
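To illustrate what that return value means, here is a toy worker loop, not DelayedJob's implementation: run every queued job and return `[success_count, failure_count]`.

```ruby
# Hypothetical sketch of a work_off-style loop: drain the queue,
# counting successes and failures.
def work_off(queue)
  success = failure = 0
  until queue.empty?
    job = queue.shift
    begin
      job.call
      success += 1
    rescue StandardError
      failure += 1
    end
  end
  [success, failure]
end

queue = [-> { :ok }, -> { raise "boom" }, -> { :ok }]
work_off(queue) # => [2, 1]
```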
I guess what I encountered is a rare case, but it's definitely an interesting one. I kinda prefer this way of mimicking the real-world environment to prevent possible regressions.

Commercial time: if you're about to build an API-centric Rails app, be sure to check out the awesome gem called spyke made by @balvig; the slogan is "Interact with remote REST services in an ActiveRecord-like manner."
ActiveJob is the headline feature of Rails 4.2, and the Active Job Basics guide on RailsGuides explains the philosophy and usage very well, so make sure you've checked that first. However, there are some gotchas if you want to use it right now in your Rails 4.1 app. Here I'm gonna show you how to install it in 4.1 and the things you need to take extra care of.
Add the gem `activejob` to your Gemfile, then create an `active_job.rb` file under `config/initializers` and paste in the code below.

```ruby
require 'active_job'

# or any other supported backend such as :sidekiq or :delayed_job
ActiveJob::Base.queue_adapter = :inline
```
Now you should be able to load ActiveJob in your Rails app without error.
Note that the gem you installed is not the one inside the Rails repository (which, at the time of writing, is at version 4.2.0.beta2, same as Rails); the one you installed is version 0. You can find the archived source code in its original repository.
To create a job, you have to manually create the `app/jobs` folder first, then follow the same naming convention to create your job class file like:

```ruby
class GuestsCleanupJob < ActiveJob::Base
  queue_as :default

  def perform(*args)
    # Do something later
  end
end
```
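The `:inline` adapter configured earlier simply runs the job right away instead of queueing it. Here is a minimal plain-Ruby sketch of that pluggable-adapter idea; the classes are hypothetical, not ActiveJob's real implementation.

```ruby
# An adapter decides what "enqueue" means; the inline one just performs
# the job immediately, with no queue involved.
class InlineAdapter
  def enqueue(job_class, *args)
    job_class.new.perform(*args)
  end
end

class BaseJob
  class << self
    attr_accessor :queue_adapter
  end

  # Delegates to whatever adapter is configured on the base class.
  def self.enqueue(*args)
    BaseJob.queue_adapter.enqueue(self, *args)
  end
end

class CleanupJob < BaseJob
  def perform(name)
    "cleaned up #{name}"
  end
end

BaseJob.queue_adapter = InlineAdapter.new
CleanupJob.enqueue("guests") # => "cleaned up guests"
```

Swapping `InlineAdapter` for one that pushes to Sidekiq or DelayedJob is the whole point of the unified interface.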
```ruby
GuestsCleanupJob.enqueue(record)
GuestsCleanupJob.enqueue(record, options)
```

Note that in Rails 4.2, `enqueue` has changed to `perform_later`.
```ruby
# Rails 4.2.beta2
Rails.application.config.active_job.queue_adapter = :delayed_job

# Rails 4.1
ActiveJob::Base.queue_adapter = :delayed_job
```
p.s. I haven’t checked ActionMailer. I’m currently using it with DelayedJob and so far so good.
ActiveJob is very convenient: it provides a unified interface for the job infrastructure that lets you switch backends easily. But as you can see, there are big diffs between the latest development version and the one we're able to install in Rails 4.1.

Is it worth the effort to try it now, and to push these small upcoming changes onto your mental stack for when you upgrade to Rails 4.2? My suggestion: if you're just about to implement a queue system and are willing to adapt to it, then it's OK; otherwise it's probably better to leave your current app running as-is and wait for a more mature time.
When you run `rake middleware`, you'll see `use ActionDispatch::Static` in development mode but not in production mode.
In the normal case you should not set that to `true`, unless you're trying out production mode on your local machine. As the comment in the source code suggests:
Disable Rails's static asset server (Apache or nginx will already do this)
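That comment sits next to the setting itself. A sketch of the relevant lines, using the Rails 4 option name (check your own generated config files):

```ruby
# config/environments/development.rb
# Rails serves files from public/ itself, so ActionDispatch::Static is loaded.
config.serve_static_assets = true

# config/environments/production.rb
# Disable Rails's static asset server (Apache or nginx will already do this)
config.serve_static_assets = false
```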
In your Rails app, when you need an environment variable like `ENV['TWITTER_CONSUMER_KEY']` for local development, where do you put it? Do you simply set it before you start your Rails server as a one-time thing, or put it under `~/.profile` or `~/.zshrc`?

Well, that works, but I'm not happy with it. First, the variable belongs to a specific project, and exposing it to the global env makes me a little uncomfortable; second, what if you happen to have more than one Twitter-integrated app? How do you name the variables to avoid collisions?
If you're using Pow, there is a perfect solution for you.
Pow provides these two files for you to configure Pow and set up any environment variables.
Before an application boots, Pow attempts to execute two scripts — first .powrc, then .powenv — in the application's root. Any environment variables exported from these scripts are passed along to Rack.
The convention here is to put `.powrc` under git version control, and to override or set up any project-specific environment variables in `.powenv`, which you keep out of version control.
BTW, you must run this command to restart Pow manually so these scripts get reloaded:
$ touch ~/.pow/restart.txt
[Pow Document: Customizing Environment Variables](http://pow.cx/manual.html#section_2.2)