Cursor + Modern Rails: From Zero to Production


I took a side project from zero to a live HTTPS app with auth, transactional email, and catch-all inbox forwarding. I did it while pair programming with Cursor and using the current Rails and deployment ecosystem. Here’s every step that got me there.

What We Built

A small Rails 8 app for hosts and cleaning crews. Sign-in, roles (admin / host / cleaner), password reset, and a full production deploy with:

  • Hetzner for the server (and a floating IP so the box can be replaced without touching DNS)
  • Kamal for build and deploy (Docker, one command)
  • Cloudflare for DNS and SSL (and later, inbound email)
  • Amazon SES for outbound mail (e.g. password reset)
  • Cloudflare Email Routing for catch-all *@domain to Gmail

Rails 8, SQLite everywhere, native Rails authentication (no Devise), Solid Queue for background jobs. The kind of stack that stays simple until you need more.

Why Cursor and This Stack

I wanted to move fast without hand-holding a junior dev through every step. Cursor was the pair programmer: it knew the codebase, suggested config, and caught mistakes (e.g. env vars not making it into the container). Modern Rails gave us a single bin/rails new app, built-in auth generators, and a deployment story that matches the Docker + Kamal approach. No custom Capistrano or Heroku. Just a config/deploy.yml, a registry, and bin/kamal deploy.


Step 1: Local Setup

Before writing app code we got the whole toolchain installed and documented so that anyone (or future me) could run the app with bin/setup and bin/rails s.

What went in:

  • Ruby: Pinned via asdf and a project .tool-versions (and later .ruby-version for CI and Docker). We started on 3.3.x and later moved to 3.4.x so the same version runs locally, in the Dockerfile, and in CI.
  • Rails: gem install rails -v "~> 8.1" so we got 8.1.x.
  • Node.js: For asset pipeline and any future React. We used an LTS version (20 or 22).
  • SQLite: No install needed on the Mac. Rails uses it for development, test, and (for this app) production too.
  • Docker: Docker Desktop or Docker from Homebrew. Kamal and Rails’ default container setup expect the Docker daemon to be running.
  • Kamal: gem install kamal so we could run kamal init and later bin/kamal deploy.

Verify your setup. One-liners to confirm each piece is installed and at a compatible version (run from the project directory, or anywhere for the first four):

ruby -v                           # expect 3.4.x (or whatever your
                                  # .ruby-version says)
rails -v                          # or: bundle exec rails -v;
                                  # expect 8.1.x
node -v                           # expect v20.x or v22.x (LTS)
sqlite3 --version                 # Rails will use it; this confirms
                                  # the CLI is there if you need it
docker --version && docker info   # second command fails if the
                                  # daemon isn't running
kamal version

We also added a small check-setup script that runs all of the above (and a few extras) in one go. Running that script before coding saved time when something was missing. Cursor helped keep the setup doc and the script in sync with the app.
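
The core of that script is just string checks against each tool's reported version. A plain-Ruby sketch of the idea (the real script shells out to ruby -v, node -v, and so on; the tool output and pinned prefixes below are examples):

```ruby
# Sketch of the comparison inside check-setup: pull the first dotted
# version number out of a tool's output and match it against the prefix
# we pinned (e.g. "3.4" from .ruby-version).
def version_ok?(reported, expected_prefix)
  version = reported.to_s[/\d+(?:\.\d+)+/]
  !version.nil? && version.start_with?(expected_prefix)
end

version_ok?("ruby 3.4.2 (2025-01-15 revision abc) [arm64]", "3.4")  # true
version_ok?("v20.11.1", "22")                                       # false
```

Each failing check prints what was expected versus what was found, which is what made a missing or stale tool obvious before any coding started.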


Step 2: Auth and Roles

We used Rails’ native authentication (no Devise). bin/rails generate authentication gave us sessions, secure cookies, and a password-reset flow with signed, expiring tokens.

Users: A User model with email, password_digest, first_name, and last_name. Emails are normalized (downcased, stripped) so we don’t get duplicate accounts by case.
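
The normalization itself is tiny. What Rails applies before validation and save, shown as a standalone lambda (in the app this is declared on the model with Rails' normalizes macro):

```ruby
# Stand-in for the User model's email normalization; the app declares
# it as `normalizes :email, with: ->(e) { e.strip.downcase }`.
NORMALIZE_EMAIL = ->(email) { email.to_s.strip.downcase }

NORMALIZE_EMAIL.call("  Alice@Example.COM ")  # => "alice@example.com"
```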

Roles: Many-to-many. A roles table (e.g. admin, host, cleaner), a user_roles join table, and user.roles / user.role_names. We added helpers like user.admin? and user.host? so controllers and views stay readable.

Authorization: CanCanCan. The Ability model grants can :manage, :all and can :access, :admin only to users who have the admin role. Non-admins hitting /admin get redirected with a “not authorized” message.
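
The role helpers and the admin gate reduce to set membership. A plain-Ruby stand-in (the real code derives role_names from ActiveRecord associations and puts the rule in a CanCan::Ability subclass; the helper names mirror the app's):

```ruby
# Plain-Ruby sketch of the role helpers. The app gets role_names from
# the roles association rather than a constructor argument.
class User
  attr_reader :role_names

  def initialize(role_names)
    @role_names = role_names
  end

  def admin?
    role_names.include?("admin")
  end

  def host?
    role_names.include?("host")
  end

  def cleaner?
    role_names.include?("cleaner")
  end
end

# Mirrors the CanCanCan rule: only admins can reach the /admin area.
def can_access_admin?(user)
  user.admin?
end
```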

Flows:

  • Login: GET /session/new to sign-in form, then POST /session, then redirect to root or return_to.
  • Logout: DELETE /session (e.g. “Sign out” link).
  • Password reset: “Forgot password?” to GET /passwords/new. User enters email. App sends a reset link via PasswordsMailer.reset(user).deliver_later. The link goes to GET /passwords/:token/edit. User sets a new password and can sign in again. In development, mail is not sent unless you run the job processor. We ran bin/jobs in a second terminal (or used bin/dev, which starts both the server and the job worker) so the reset email actually fired.
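
The reset link works because the token is signed and carries its own expiry, so the server doesn't have to store anything. Rails generates these tokens for us; the shape is easy to show in plain Ruby with an HMAC (the secret, field names, and 15-minute TTL below are illustrative, not the app's actual implementation):

```ruby
require "openssl"
require "base64"
require "json"

SECRET = "demo-secret"  # the real app derives this from secret_key_base

# Issue a token that embeds the user id and an expiry, signed so it
# can't be tampered with.
def issue_reset_token(user_id, ttl_seconds: 15 * 60, now: Time.now)
  payload = Base64.strict_encode64(
    JSON.dump("uid" => user_id, "exp" => now.to_i + ttl_seconds)
  )
  "#{payload}--#{OpenSSL::HMAC.hexdigest("SHA256", SECRET, payload)}"
end

# Return the user id if the signature checks out and the token hasn't
# expired; nil otherwise.
def redeem_reset_token(token, now: Time.now)
  payload, sig = token.split("--", 2)
  return nil unless payload && sig
  return nil unless OpenSSL::HMAC.hexdigest("SHA256", SECRET, payload) == sig
  data = JSON.parse(Base64.strict_decode64(payload))
  data["exp"] >= now.to_i ? data["uid"] : nil
end
```

A token redeemed within the window yields the user id; an expired or altered token yields nil, which the controller treats as “invalid or expired link.”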

We added an admin area at /admin (a simple dashboard) behind Admin::BaseController, which uses CanCanCan to ensure only admins can access it. Cursor was useful for wiring the mailer, the reset flow, and the “forgot password” UX without leaving the codebase.


Step 3: Tests

Rails 8 ships with Minitest. We kept it and made sure every current behavior was covered before deploying.

What we tested:

  • Home: Unauthenticated users see a sign-in link. Authenticated users see sign-out and (for admins) a link to the admin dashboard.
  • Sessions: New form, create with valid/invalid credentials, destroy (sign out).
  • Passwords: New form, create (enqueues mail; unknown user gets the same “instructions sent” message for security). Edit with valid/invalid token, update with success and with password mismatch.
  • Admin: Unauthenticated gets redirect to sign-in. Authenticated but not admin gets redirect to root with “not authorized.” Admin gets dashboard.
  • Models: User (email normalization, role_names, admin?, name), Role (name presence and uniqueness), UserRole (associations), Ability (admin can access admin, others cannot), Session (belongs to user).
  • PasswordsMailer: Reset email is sent to the user’s email with the right subject (“Reset your password”) and exactly one delivery.

We used fixtures for users and roles and a small session helper (sign_in_as(user)) so request tests could act as a given user by setting the signed cookie. After this step we had about 30 tests and 80 assertions, all passing. No fancy test framework. Just enough to deploy with confidence.
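
For flavor, here is the shape of one of the model tests as plain Minitest (the real suite uses ActiveSupport::TestCase, fixtures, and the sign_in_as helper; this trimmed version inlines the normalization so it runs standalone):

```ruby
require "minitest/autorun"

class EmailNormalizationTest < Minitest::Test
  # Stand-in for the User model's email normalization (see Step 2).
  def normalize(email)
    email.to_s.strip.downcase
  end

  def test_mixed_case_and_whitespace_collapse_to_one_account
    assert_equal "a@b.com", normalize("  A@B.Com ")
  end
end
```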


Step 4: Deploy (Hetzner + Kamal + Docker Hub)

Deploy was broken into small, ordered steps so we could verify each piece before moving on.

4.a: Hetzner, server and access

We created a Hetzner Cloud project (if needed), spun up a server (e.g. CX22 or CPX11), and added an SSH public key so we could ssh root@<server-ip>. We also created a floating IP and attached it to that server. The floating IP is what we use everywhere (Kamal, DNS). When we eventually replace the server we only reassign the same floating IP to the new box and never touch Cloudflare or Kamal config.

4.b: Kamal, point at the server

In config/deploy.yml we set servers.web to the floating IP (or the server IP if we hadn’t set up the float yet). No deploy yet. Just config.
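
The relevant slice of config/deploy.yml at this point is tiny (203.0.113.10 is a placeholder for the floating IP, and the service name is this project's):

```yaml
# config/deploy.yml (excerpt)
service: turn-genius
servers:
  web:
    - 203.0.113.10   # the Hetzner floating IP, not the server's own IP
```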

4.c: Kamal, registry (Docker Hub)

The app image has to live somewhere. We used a private Docker Hub repository so only we (and the deploy server) can pull it.

  • Created the repo on hub.docker.com (e.g. username/turn-genius, private).
  • In Docker Hub, Account, Security we created an Access Token with Read, Write, Delete (Write is required for push). We stored that token in a local .env.local file as KAMAL_REGISTRY_PASSWORD=.... .env.local is gitignored and never committed.
  • In config/deploy.yml we set the registry username and image to match our Docker Hub username and repo. We did not set a server for the registry. Docker Hub is the default, and leaving server out avoids Kamal/Docker login issues where push fails with “access denied” or “insufficient_scope” even with a valid token.
  • We use .kamal/secrets to pass through KAMAL_REGISTRY_PASSWORD from the environment (so the deploy command reads from the shell that has source .env.local). No raw token in the repo.

Before every deploy we run source .env.local in the same terminal (or source .env.local && bin/kamal deploy) so Kamal and the app container get all secrets.
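
Under those rules the registry section of config/deploy.yml stays minimal; a sketch assuming Kamal 2's syntax (username and image name are examples):

```yaml
# config/deploy.yml (excerpt) -- no `server:` key, so Docker Hub is used
image: myuser/turn-genius
registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD   # resolved via .kamal/secrets at deploy time
```

And .kamal/secrets just forwards the value from the current shell: KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD. That is why sourcing .env.local first matters.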

4.d: First deploy (no domain)

We needed the app reachable at http://<server-ip>:3000 (or whatever port Kamal uses) to confirm the container and Kamal flow worked.

  • We added RAILS_MASTER_KEY to .env.local (from config/master.key or from bin/rails credentials:edit if we had to create it). Kamal injects it via env.secret so the container can decrypt Rails credentials.
  • We ran source .env.local && bin/kamal deploy. The first time we hit a push failure until the Docker Hub token had Write permission. After fixing that, the image pushed and the app booted on the server.
  • We opened http://<server-ip>:3000 in a browser and confirmed the app loaded. No domain or SSL yet.
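
The master key rides along the same way; a sketch of the deploy.yml excerpt (the variable must exist in the deploying shell, forwarded through .kamal/secrets like the registry password):

```yaml
# config/deploy.yml (excerpt)
env:
  secret:
    - RAILS_MASTER_KEY   # lets the container decrypt Rails credentials
```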

4.e: Domain + Cloudflare DNS

We pointed the real domain at the server so we could add SSL and use real hostnames.

  • In Cloudflare (DNS for the domain), we added two A records: one for www and one for @ (root). Both pointed at the floating IP. We could choose “DNS only” (grey cloud) or “Proxied” (orange). Both work. We waited for DNS to propagate (dig www.turngenius.com etc.) then hit http://www.turngenius.com and got a 200 from the app.

4.f: Kamal, SSL

We enabled Kamal’s proxy with ssl: true and a hosts list so both the apex and www got HTTPS via Let’s Encrypt.

  • In config/deploy.yml we turned on the proxy and set hosts to both the root domain and www (e.g. turngenius.com and www.turngenius.com). Kamal uses kamal-proxy under the hood.
  • We deployed again. The proxy obtained certificates and terminated TLS for both hostnames.
  • In Cloudflare we set SSL/TLS mode to Full so the edge-to-origin connection was encrypted too.
  • In config/environments/production.rb we set config.assume_ssl, config.force_ssl, and config.hosts so Rails trusted the proxy and didn’t reject the request host.
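
Both changes are a few lines each. A sketch of the Kamal side, assuming Kamal 2's proxy syntax and this project's hostnames:

```yaml
# config/deploy.yml (excerpt)
proxy:
  ssl: true
  hosts:
    - turngenius.com
    - www.turngenius.com
```

On the Rails side, production.rb gets config.assume_ssl = true, config.force_ssl = true, and both hostnames added to config.hosts.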

4.g: Smoke check

We visited https://www.turngenius.com and https://turngenius.com, signed in, hit the admin dashboard, and confirmed SSL and behavior were correct.

Ruby version: We kept Ruby in sync everywhere. .ruby-version, .tool-versions, and the Dockerfile all use the same version (e.g. 3.4.x) so CI and production match local. When we bumped Ruby we updated those files, ran asdf install ruby X.Y.Z, bundle install, and bin/rails db:prepare, then re-ran tests and deploy.


Step 5: Transactional Email (Amazon SES)

Production needed to send real emails (e.g. password reset). Cloudflare doesn’t offer outbound SMTP, so we used Amazon SES via SMTP. No extra gem. Just Rails’ built-in SMTP support.

One-time setup:

  • In AWS we opened Amazon SES (e.g. in us-east-1). We verified the sending domain (turngenius.com). SES gave us DKIM CNAME records. We added those in Cloudflare DNS. Once SES showed “Verified,” we could send from any address at that domain (e.g. support@turngenius.com). Alternatively you can verify a single email address first.
  • We created SMTP credentials in SES (Account dashboard, SMTP settings, Create SMTP credentials). AWS creates an IAM user and shows the SMTP username and SMTP password once. We copied both. These are not your normal AWS access keys.
  • New SES accounts start in sandbox. You can only send to verified addresses. To send to any address we requested production access in the SES console.

In the app:

  • In config/environments/production.rb we configured Action Mailer to use SMTP when SES_SMTP_USERNAME and SES_SMTP_PASSWORD are set (port 587, STARTTLS, host email-smtp.<region>.amazonaws.com). Otherwise we left deliveries disabled so we didn’t accidentally send mail in prod without credentials.
  • In config/deploy.yml we added those two vars to env.secret and set MAIL_FROM (e.g. TurnGenius <support@turngenius.com>) and AWS_REGION in env.clear. The mailer uses MAIL_FROM as the default “from” address. It must be a verified identity in SES.
  • We put SES_SMTP_USERNAME and SES_SMTP_PASSWORD in .env.local and made sure .kamal/secrets passes them through from the environment. Critical: Kamal builds the env file from the current shell when you run bin/kamal deploy. If you don’t run source .env.local in the same shell first, the container won’t have the SES vars and mail will “succeed” without actually sending. We deploy with source .env.local && bin/kamal deploy every time.
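
The mailer wiring in config/environments/production.rb looks roughly like this (a sketch, assuming the env var names above; the guard keeps deliveries off when credentials are absent):

```ruby
# config/environments/production.rb (excerpt, sketch)
if ENV["SES_SMTP_USERNAME"] && ENV["SES_SMTP_PASSWORD"]
  config.action_mailer.delivery_method = :smtp
  config.action_mailer.smtp_settings = {
    address: "email-smtp.#{ENV.fetch("AWS_REGION", "us-east-1")}.amazonaws.com",
    port: 587,
    user_name: ENV["SES_SMTP_USERNAME"],
    password: ENV["SES_SMTP_PASSWORD"],
    authentication: :login,
    enable_starttls_auto: true
  }
  # MAIL_FROM must be a verified identity in SES.
  config.action_mailer.default_options = { from: ENV.fetch("MAIL_FROM") }
else
  config.action_mailer.perform_deliveries = false
end
```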

Test: We went to the live site, used “Forgot password?” with an email that existed in the app (and, in sandbox, was verified in SES), submitted, and confirmed the reset email arrived. In production we could then send to any address after SES production access was approved.


Step 6: Inbound Email (Cloudflare Email Routing)

We wanted *@turngenius.com to land in a single Gmail inbox (e.g. for support@, noreply@, or random addresses) without paying for a separate forwarding service.

  • We enabled Cloudflare Email Routing for the zone. Cloudflare gave us MX and TXT records to add. We added them in DNS.
  • We already had SPF for outbound (SES). We combined both Cloudflare (for receiving) and Amazon SES (for sending) in one SPF record, e.g. v=spf1 include:_spf.mx.cloudflare.net include:amazonses.com ~all.
  • We verified the destination Gmail address (Cloudflare sends a one-time code to that inbox).
  • We created a catch-all rule: “Send to an email” to that Gmail address. Now any address at the domain forwards there.
  • We tested with SendTestMail.com (no signup required). Delivered. Some anonymous “fake mailer” tools get categorized by Cloudflare as “Other” and may not be forwarded. A normal sender or a proper test tool works.

What Went Right

  • One deploy command. source .env.local && bin/kamal deploy builds the image, pushes to the registry, pulls on the server, runs migrations via the app, and refreshes the proxy. No custom scripts.
  • Floating IP. DNS and Kamal both point at the same IP. When we replace the server we only reassign that IP in Hetzner and run deploy again.
  • SSL and two hostnames. Kamal’s proxy handled both apex and www with a single hosts list and Let’s Encrypt.
  • Secrets in one place. .env.local holds tokens and keys. .kamal/secrets forwards them into Kamal. No credentials in the repo or in this post.
  • Cursor as pair programmer. It suggested the right config blocks, caught missing env vars, and helped with Cloudflare vs. SES vs. Kamal docs so we didn’t chase dead ends.

If You Try This

  • Use a floating IP (or stable DNS) from day one so you can replace the server without redoing DNS.
  • Before every deploy: run source .env.local (or source .env.local && bin/kamal deploy) so the container gets registry, RAILS_MASTER_KEY, and SES credentials.
  • For SES: verify the domain (or the from address) in SES, create SMTP credentials (not normal AWS keys), and put them where Kamal will inject them (e.g. .env.local + .kamal/secrets). If mail doesn’t send, check that the env vars are present in the container (e.g. bin/kamal app exec env).
  • For inbound catch-all, Cloudflare Email Routing is free and fits if you’re already on Cloudflare. Combine SPF with your outbound provider (e.g. SES) in a single TXT record.
  • Test forwarding with a real sender or a tool like SendTestMail. Some anonymous mailers get filtered and don’t arrive.

Rails 8 + Kamal + Hetzner + Cloudflare + SES is a solid stack for a small app. Cursor made it easy to keep the app, the config, and the docs in sync without leaving the editor. If you’re weighing modern Rails plus an AI pair programmer, this combo is worth trying.