
Let's Go Further - Chapter 6: SQL Migrations


 

 

Installing the migrate tool

To manage SQL migrations in this project we’re going to use the migrate command-line tool (which itself is written in Go).

On Linux and Windows, the easiest method is to download a pre-built binary and move it to a location on your system path.
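On Linux, for instance, that could look something like the sketch below (this assumes the v4.18.1 linux-amd64 release asset; check the golang-migrate releases page on GitHub for the exact URL and archive contents for your platform):

# Download and extract the pre-built binary (asset name is an assumption;
# adjust the version and platform to match your machine).
curl -L https://github.com/golang-migrate/migrate/releases/download/v4.18.1/migrate.linux-amd64.tar.gz | tar xvz

# Move the extracted binary onto your system path (depending on the release
# it may be named "migrate" or "migrate.linux-amd64").
sudo mv migrate /usr/local/bin/migrate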

Before you continue, please check that it’s available and working on your machine by trying to execute the migrate binary with the -version flag. It should output the current version number similar to this:

zzh@ZZHPC:~$ migrate -version
4.18.1

 

Working with SQL migrations

To generate a pair of new (empty) migration files for creating the movie table, use the migrate create command:

zzh@ZZHPC:/zdata/Github/greenlight$ migrate create -seq -ext=.sql -dir=./migrations create_movie_table
/zdata/Github/greenlight/migrations/000001_create_movie_table.up.sql
/zdata/Github/greenlight/migrations/000001_create_movie_table.down.sql

In this command:

  • The -seq flag indicates that we want to use sequential numbering like 0001, 0002, ... for the migration files (instead of a Unix timestamp, which is the default).
  • The -ext flag indicates that we want to give the migration files the extension .sql.
  • The -dir flag indicates that we want to store the migration files in the ./migrations directory (which will be created automatically if it doesn’t already exist).
  • The name create_movie_table is a descriptive label that we give the migration files to signify their contents.

 

File: migrations/000001_create_movie_table.up.sql

CREATE TABLE IF NOT EXISTS movie (
    id         bigserial                   PRIMARY KEY,
    created_at timestamp(0) with time zone NOT NULL DEFAULT NOW(),
    title      text                        NOT NULL,
    year       integer                     NOT NULL,
    runtime    integer                     NOT NULL,
    genres     text[]                      NOT NULL,
    version    integer                     NOT NULL DEFAULT 1
);

 

For a discussion of why the text type is generally a good choice over char(n) and varchar(n) in PostgreSQL, see: https://www.depesz.com/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/

 

File: migrations/000001_create_movie_table.down.sql

DROP TABLE IF EXISTS movie;

 

zzh@ZZHPC:/zdata/Github/greenlight$ migrate create -seq -ext=.sql -dir=./migrations add_movie_check_constraints
/zdata/Github/greenlight/migrations/000002_add_movie_check_constraints.up.sql
/zdata/Github/greenlight/migrations/000002_add_movie_check_constraints.down.sql

 

File: migrations/000002_add_movie_check_constraints.up.sql

ALTER TABLE movie ADD CONSTRAINT movie_runtime_check CHECK (runtime >= 0);
ALTER TABLE movie ADD CONSTRAINT movie_year_check CHECK (year BETWEEN 1888 AND date_part('year', now()));
ALTER TABLE movie ADD CONSTRAINT movie_genres_length_check CHECK (array_length(genres, 1) BETWEEN 1 AND 5);

 

File: migrations/000002_add_movie_check_constraints.down.sql

ALTER TABLE movie DROP CONSTRAINT IF EXISTS movie_runtime_check;
ALTER TABLE movie DROP CONSTRAINT IF EXISTS movie_year_check;
ALTER TABLE movie DROP CONSTRAINT IF EXISTS movie_genres_length_check;

 

 

Executing the migrations

To apply the migrations, run the migrate tool with the up command, passing in the path to the migrations folder and the database DSN:

zzh@ZZHPC:/zdata/Github/greenlight$ migrate -path=./migrations -database=$GREENLIGHT_DB_DSN up
1/u create_movie_table (8.585235ms)
2/u add_movie_check_constraints (13.254208ms)
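This assumes that the GREENLIGHT_DB_DSN environment variable from the previous chapter is exported in the current shell. If it isn't, set it first; something along these lines, with placeholder credentials rather than the real ones:

# Placeholder DSN -- substitute the user, password and database
# that you created in Chapter 5.
export GREENLIGHT_DB_DSN='postgres://greenlight:password@localhost/greenlight?sslmode=disable'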

 

At this point, it’s worth opening a connection to your database and listing the tables with the \dt meta command:

zzh@ZZHPC:~$ docker exec -it postgres17 psql --dbname=greenlight --username=greenlight
psql (17.1 (Debian 17.1-1.pgdg120+1))
Type "help" for help.

greenlight=> \dt
                List of relations
 Schema |       Name        | Type  |   Owner    
--------+-------------------+-------+------------
 public | movie             | table | greenlight
 public | schema_migrations | table | greenlight
(2 rows)

greenlight=>

You should see that the movie table has been created, along with a schema_migrations table, both of which are owned by the greenlight user.

The schema_migrations table is automatically generated by the migrate tool and used to keep track of which migrations have been applied. Let’s take a quick look inside it:

greenlight=> SELECT * FROM schema_migrations;
 version | dirty 
---------+-------
       2 | f
(1 row)

The version column here indicates that our migration files up to (and including) number 2 in the sequence have been executed against the database. The value of the dirty column is false, which indicates that the migration files were cleanly executed without any errors and the SQL statements they contain were successfully applied in full.
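If you ever need to inspect or change this version from the command line rather than from psql, the migrate tool has subcommands for that too. For example:

# Print the current migration version and dirty state.
migrate -path=./migrations -database=$GREENLIGHT_DB_DSN version

# Roll back only the most recently applied migration.
migrate -path=./migrations -database=$GREENLIGHT_DB_DSN down 1

# Migrate up or down to a specific version in the sequence.
migrate -path=./migrations -database=$GREENLIGHT_DB_DSN goto 1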

If you like, you can also run the \d meta command on the movie table to see the structure of the table and confirm that the CHECK constraints were created correctly. Like so:

greenlight=> \d movie
                                        Table "public.movie"
   Column   |            Type             | Collation | Nullable |              Default              
------------+-----------------------------+-----------+----------+-----------------------------------
 id         | bigint                      |           | not null | nextval('movie_id_seq'::regclass)
 created_at | timestamp(0) with time zone |           | not null | now()
 title      | text                        |           | not null | 
 year       | integer                     |           | not null | 
 runtime    | integer                     |           | not null | 
 genres     | text[]                      |           | not null | 
 version    | integer                     |           | not null | 1
Indexes:
    "movie_pkey" PRIMARY KEY, btree (id)
Check constraints:
    "movie_genres_length_check" CHECK (array_length(genres, 1) >= 1 AND array_length(genres, 1) <= 5)
    "movie_runtime_check" CHECK (runtime >= 0)
    "movie_year_check" CHECK (year >= 1888 AND year::double precision <= date_part('year'::text, now()))

greenlight=>
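If you want to confirm that the constraints are actually enforced, you could try inserting a row that violates one of them; PostgreSQL should reject it with a check-constraint violation. An illustrative statement (not from the original chapter):

-- This INSERT should fail the movie_year_check constraint,
-- because 1500 is earlier than 1888.
INSERT INTO movie (title, year, runtime, genres)
VALUES ('Test Movie', 1500, 102, '{"drama"}');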

 

 


    《TCP/IP网络编程》学习笔记|Chapter11:进程间通信《TCP/IP网络编程》学习笔记|Chapter11:进程间通信进程间通信的基本概念通过管道实现进程间通信通过管道进行进程间双向通信运用进程间通信习题(1)什么是进程间通信?分别从概念和内存的角度进行说明。(2)进程间通信需要......