
Chapter 3: Project Setup

So far we have just been toying around with our entities, so let's start building something real. We said we would use Fastify as a web server and Vitest for testing it. Let's set that up, create our first endpoint and test it.

Fastify

Let's create a new file app.ts inside the src directory and export a bootstrap function from it, where we create the fastify app instance. Remember how we were forking the EntityManager to get around the global context validation? For web servers, we can leverage middlewares, or fastify hooks in this case, to achieve unique request contexts automatically. MikroORM provides a handy helper called RequestContext which can be used to create the fork for each request. The EntityManager is aware of this class and tries to get the right context from it automatically.

How does the RequestContext helper work?

Internally, all EntityManager methods that work with the Identity Map (e.g. em.find() or em.getReference()) first call em.getContext() to access the contextual fork. This method will first check whether we are running inside a RequestContext handler and prefer the EntityManager fork from it.

// we call em.find() on the global EM instance
const res = await orm.em.find(Book, {});

// but under the hood this resolves to
const res = await orm.em.getContext().find(Book, {});

// which then resolves to
const res = await RequestContext.getEntityManager().find(Book, {});

The RequestContext.getEntityManager() method then checks the AsyncLocalStorage static instance we use for creating new EM forks in the RequestContext.create() method.

The AsyncLocalStorage class from Node.js core is the magician here. It allows us to track the context throughout the async calls, and it lets us decouple the EntityManager fork creation (usually done in a middleware, as shown in the previous section) from its usage through the global EntityManager instance.
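To illustrate the mechanism, here is a minimal, simplified sketch of how AsyncLocalStorage can carry a per-request value through nested async calls. This is not MikroORM's actual implementation, and the Fork interface is just a stand-in for the EntityManager fork:

import { AsyncLocalStorage } from 'node:async_hooks';

// hypothetical stand-in for the EntityManager fork
interface Fork { id: number }

const storage = new AsyncLocalStorage<Fork>();

function createContext(fork: Fork, next: () => void) {
  // everything called from `next`, sync or async, sees this fork
  storage.run(fork, next);
}

function getContext(): Fork | undefined {
  // resolves to the fork of whatever request is currently being handled
  return storage.getStore();
}

createContext({ id: 1 }, async () => {
  await new Promise(resolve => setTimeout(resolve, 10));
  console.log(getContext()); // { id: 1 }, even across the await
});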

app.ts
import { MikroORM, RequestContext } from '@mikro-orm/core';
import { fastify } from 'fastify';

export async function bootstrap(port = 3001) {
  const orm = await MikroORM.init();
  const app = fastify();

  // register request context hook
  app.addHook('onRequest', (request, reply, done) => {
    RequestContext.create(orm.em, done);
  });

  // shut down the connection when closing the app
  app.addHook('onClose', async () => {
    await orm.close();
  });

  // register routes here
  // ...

  const url = await app.listen({ port });

  return { app, url };
}

And use this function in the server.ts file - you can wipe all the code you had so far and replace it with the following:

server.ts
import { bootstrap } from './app.js';

try {
  const { url } = await bootstrap();
  console.log(`server started at ${url}`);
} catch (e) {
  console.error(e);
}

Now, hitting npm start again, you should see something like this:

[info] MikroORM version: 5.4.3
[discovery] ORM entity discovery started, using TsMorphMetadataProvider
[discovery] - processing 4 files
[discovery] - processing entity Article (./blog-api/src/modules/article/article.entity.ts)
[discovery] - using cached metadata for entity Article
[discovery] - processing entity Tag (./blog-api/src/modules/article/tag.entity.ts)
[discovery] - using cached metadata for entity Tag
[discovery] - processing entity User (./blog-api/src/modules/user/user.entity.ts)
[discovery] - using cached metadata for entity User
[discovery] - processing entity BaseEntity (./blog-api/src/modules/common/base.entity.ts)
[discovery] - using cached metadata for entity BaseEntity
[discovery] - entity discovery finished, found 5 entities, took 78 ms
[info] MikroORM successfully connected to database sqlite.db
server started at http://127.0.0.1:3001

The server is running, good! To stop it, press CTRL + C.

Article listing endpoint

Let's add our first endpoint - GET /article, which lists all existing articles. It is a public endpoint that can take limit and offset query parameters and returns the requested items together with the total count of all available articles.

We could use em.count() to get the number of entities, but since we want to return the count next to the paginated list of entities, there is a better way - em.findAndCount(). This method serves exactly this purpose, returning the paginated list together with the total count of items.

app.ts
app.get('/article', async request => {
  const { limit, offset } = request.query as { limit?: number; offset?: number };
  const [items, total] = await orm.em.findAndCount(Article, {}, {
    limit, offset,
  });

  return { items, total };
});

Basic Dependency Injection container

Before we get to testing the first endpoint, let's refactor a bit to make the setup more future-proof. Add a new src/db.ts file, which will serve as a simple Dependency Injection (DI) container. It will export an initORM() function that initializes the ORM on the first call and caches it in memory, so the following calls return the same instance. Thanks to top-level await, we could just initialize the ORM and export it right away, but soon we will want to alter some options before doing so, for testing purposes, and having a function like this will help achieve that.

Note that we are importing EntityManager, EntityRepository, MikroORM and Options from the @mikro-orm/sqlite package - those exports are typed to the SqliteDriver.

db.ts
import { EntityManager, EntityRepository, MikroORM, Options } from '@mikro-orm/sqlite';
import { Article } from './modules/article/article.entity.js';
import { Tag } from './modules/article/tag.entity.js';
import { User } from './modules/user/user.entity.js';

export interface Services {
  orm: MikroORM;
  em: EntityManager;
  article: EntityRepository<Article>;
  user: EntityRepository<User>;
  tag: EntityRepository<Tag>;
}

let cache: Services;

export async function initORM(options?: Options): Promise<Services> {
  if (cache) {
    return cache;
  }

  const orm = await MikroORM.init(options);

  // save to cache before returning
  return cache = {
    orm,
    em: orm.em,
    article: orm.em.getRepository(Article),
    user: orm.em.getRepository(User),
    tag: orm.em.getRepository(Tag),
  };
}

And use it in the app.ts file instead of initializing the ORM directly:

app.ts
import { RequestContext } from '@mikro-orm/core';
import { fastify } from 'fastify';
import { initORM } from './db.js';

export async function bootstrap(port = 3001) {
  const db = await initORM();
  const app = fastify();

  // register request context hook
  app.addHook('onRequest', (request, reply, done) => {
    RequestContext.create(db.em, done);
  });

  // shut down the connection when closing the app
  app.addHook('onClose', async () => {
    await db.orm.close();
  });

  // register routes here
  app.get('/article', async request => {
    const { limit, offset } = request.query as { limit?: number; offset?: number };
    const [items, total] = await db.article.findAndCount({}, {
      limit, offset,
    });

    return { items, total };
  });

  const url = await app.listen({ port });

  return { app, url };
}

Importing EntityManager and EntityRepository from the driver package

While the EntityManager and EntityRepository classes are provided by the @mikro-orm/core package, those are only the base, driver-agnostic implementations. One example of what that means is the QueryBuilder - as an SQL concept, it has no place in the @mikro-orm/core package; instead, an extension of the EntityManager called SqlEntityManager is provided by the SQL driver packages (it is defined in the @mikro-orm/knex package and re-exported in every SQL driver package that depends on it). This SqlEntityManager class provides the additional SQL-related methods, like em.createQueryBuilder().

For convenience, the SqlEntityManager class is also re-exported under the EntityManager alias. This means we can do import { EntityManager } from '@mikro-orm/sqlite' to access it.

Under the hood, MikroORM will always use this driver-specific EntityManager implementation (you can verify that via console.log(orm.em), it will be an instance of SqlEntityManager), but for TypeScript to understand it, you need to import it from the driver package. The same applies to the EntityRepository and SqlEntityRepository classes.

import { EntityManager, EntityRepository } from '@mikro-orm/sqlite'; // or any other driver package

You can also use MikroORM, defineConfig and Options exported from the driver package; they work similarly, providing the driver type without the need to use generics.
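As a quick, hedged sketch, with the driver-typed EntityManager, TypeScript understands the SQL-only methods such as em.createQueryBuilder(). The Article import path and the query itself are only illustrative:

import { EntityManager } from '@mikro-orm/sqlite';
import { Article } from './modules/article/article.entity.js';

async function latestArticles(em: EntityManager) {
  // createQueryBuilder() only exists on the SQL flavour of the EntityManager
  return em.createQueryBuilder(Article)
    .select('*')
    .orderBy({ id: 'desc' })
    .limit(5)
    .getResultList();
}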

What is EntityRepository

Entity repositories are thin layers on top of the EntityManager. They act as an extension point, so you can add custom methods, or even alter the existing ones. The default EntityRepository implementation just forwards the calls to the underlying EntityManager instance.

The EntityRepository class carries the entity type, so we do not have to pass it to every find() or findOne() call.

Note that there is no such thing as "flushing a repository" - it is just a shortcut to em.flush(). In other words, we always flush the whole Unit of Work, not just the single entity that this repository represents.
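A short sketch of what that means in practice, using the db container from our db.ts file (the limit value is arbitrary):

// with the repository, the entity type is implied
const articles = await db.article.find({}, { limit: 10 });

// with the EntityManager, we pass the entity class explicitly
const sameArticles = await db.em.find(Article, {}, { limit: 10 });

// flushing always commits the whole Unit of Work,
// regardless of which repository the entities came from
await db.em.flush();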

Testing the endpoint

The first endpoint is ready, let's test it. You already have vitest installed and available via npm test, so now add a test case. Put it into the test folder and name the file with a .test.ts extension so vitest knows it is a test file.

So how should you test the endpoint? Fastify offers an easy way to test endpoints via app.inject(); all you need to do is create the fastify app instance inside the test case (you already have the bootstrap method for that). But that would be testing against your production database, and you don't want that!

Let's create one more utility file before we get to the first test, and put it into the test folder too, but without the .test.ts suffix - let's call it utils.ts. We will define a function called initTestApp that initializes the ORM with overridden options for testing, creates the schema and bootstraps our fastify app, all in one go. It will take the port number as a parameter, again to allow easy parallel runs when testing - every test case will have its own in-memory database and a fastify app running on its own port.

utils.ts
import { bootstrap } from '../src/app.js';
import { initORM } from '../src/db.js';
import config from '../src/mikro-orm.config.js';

export async function initTestApp(port: number) {
  // this will create all the ORM services and cache them
  const { orm } = await initORM({
    // first, include the main config
    ...config,
    // no need for debug information, it would only pollute the logs
    debug: false,
    // we will use in-memory database, this way we can easily parallelize our tests
    dbName: ':memory:',
    // this will ensure the ORM discovers TS entities, with ts-node, ts-jest and vitest
    // it will be inferred automatically, but we are using vitest here
    // preferTs: true,
  });

  // create the schema so we can use the database
  await orm.schema.createSchema();

  const { app } = await bootstrap(port);

  return app;
}

And now, finally, the test case. Currently there is no data, as we are using an empty in-memory database, fresh for each test run, so the article listing endpoint will return just an empty array - we will handle that in a moment.

Notice that we are using the beforeAll hook to initialize the app and afterAll to tear it down - app.close() will trigger the onClose hook that calls orm.close(). Without that, the process would hang.

article.test.ts
import { afterAll, beforeAll, expect, test } from 'vitest';
import { FastifyInstance } from 'fastify';
import { initTestApp } from './utils.js';

let app: FastifyInstance;

beforeAll(async () => {
  // we use different ports to allow parallel testing
  app = await initTestApp(30001);
});

afterAll(async () => {
  // we close only the fastify app - it will close the database connection via onClose hook automatically
  await app.close();
});

test('list all articles', async () => {
  // mimic the http request via `app.inject()`
  const res = await app.inject({
    method: 'get',
    url: '/article',
  });

  // assert it was a successful response
  expect(res.statusCode).toBe(200);

  // with expected shape
  expect(res.json()).toMatchObject({
    items: [],
    total: 0,
  });
});

Now run npm test - but wait, something is broken again:

FAIL  test/article.test.ts [ test/article.test.ts ]
TypeError: Unknown file extension ".ts" for /blog-api/src/modules/article/article.entity.ts

So the dynamic import of our entities fails to resolve TypeScript files. This is one of the gotchas of ECMAScript modules we mentioned earlier. Luckily, we have a workaround for it! Vitest automatically adds TypeScript support to import calls from the context of your tests - the problem is that MikroORM makes such calls from inside its CommonJS codebase, so vitest is not able to detect them. What we can do instead is override the dynamicImportProvider, a config option used for the actual importing - by the way, you could register any kind of transpiler like this.

All we need to do is use an import call defined inside the context of our ESM application (not necessarily inside the test), so let's add it to our ORM config:

mikro-orm.config.ts
// for vitest to get around `TypeError: Unknown file extension ".ts"` (ERR_UNKNOWN_FILE_EXTENSION)
dynamicImportProvider: id => import(id),

Run npm test again, and you should be good to go:

 ✓ test/article.test.ts (1)

Test Files 1 passed (1)
Tests 1 passed (1)
Start at 15:56:41
Duration 876ms (transform 264ms, setup 0ms, collect 300ms, tests 147ms)


PASS Waiting for file changes...
press h to show help, press q to quit

Note about unit tests

It might be tempting to skip the MikroORM.init() phase in some of your unit tests that do not require a database connection, but the init method does more than just establish it. The most important part of that method is metadata discovery, where the ORM checks all the entity definitions and sets up default values for various metadata options (mainly for the naming strategy and bidirectional relations).

The discovery phase is required for propagation to work. But worry not, you can initialize the ORM without connecting to the database, just provide connect: false in the ORM config:

const orm = await MikroORM.init({
  // ...
  connect: false,
});

Since v6, you can also use the new initSync() method to instantiate the ORM synchronously. This will run the discovery only and skip the database connection. When you first try to query the database (or work with it in any way that requires the connection), the ORM will connect to it lazily.

The sync method never connects to the database, so connect: false is implicit.

const orm = MikroORM.initSync({
  // ...
});

Seeding the database

There are many ways to go about seeding your testing database. The obvious way is to do it directly in your test, for example in the beforeAll hook, right after you initialize the ORM.
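A minimal sketch of that manual approach could look like this - the port number is arbitrary, and because initORM() caches its result, calling it again inside the test returns the same services the app uses:

import { beforeAll } from 'vitest';
import { initORM } from '../src/db.js';
import { initTestApp } from './utils.js';
import { User } from '../src/modules/user/user.entity.js';

beforeAll(async () => {
  await initTestApp(30002);

  // initORM() is cached, so this resolves to the same instance the app uses
  const { em } = await initORM();

  // use a fork, as the global EntityManager cannot be used directly
  const fork = em.fork();
  fork.create(User, { fullName: 'Foo Bar', email: 'foo@bar.com', password: 'password123' });
  await fork.flush();
});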

An alternative to that is the Seeder, an ORM package (available via @mikro-orm/seeder) which offers utilities to populate our database with (not necessarily) fake data.

We will be using the Seeder to populate the test database with fake data, but having a seeder that creates initial data for a production database is a valid approach too - we could create the default set of article tags this way, or the initial admin user. You can set up a hierarchy of seeders or call them one by one.

Let's install the seeder package and use the CLI to generate our test seeder:

npm install @mikro-orm/seeder

The next step is to register the SeedManager extension in your ORM config; this will make it available via the orm.seeder property:

import { defineConfig } from '@mikro-orm/sqlite';
import { SeedManager } from '@mikro-orm/seeder';

export default defineConfig({
  // ...
  extensions: [SeedManager],
});

Other extensions you can use are SchemaGenerator, Migrator and EntityGenerator. The SchemaGenerator (as well as MongoSchemaGenerator) is registered automatically, as it does not require any 3rd party dependencies to be installed.

Now let's try to create a new seeder named test:

npx mikro-orm-esm seeder:create test

This will create the src/seeders directory and a TestSeeder.ts file inside it, with a skeleton of your new seeder:

TestSeeder.ts
import type { EntityManager } from '@mikro-orm/core';
import { Seeder } from '@mikro-orm/seeder';

export class TestSeeder extends Seeder {

  async run(em: EntityManager): Promise<void> {}

}

We can use the em.create() function we described earlier. It effectively calls em.persist(entity) before returning the created entity, so you don't even need to do anything with the entity itself; calling em.create() on its own is enough. Time to test it!

TestSeeder.ts
export class TestSeeder extends Seeder {

  async run(em: EntityManager): Promise<void> {
    em.create(User, {
      fullName: 'Foo Bar',
      email: 'foo@bar.com',
      password: 'password123',
      articles: [
        {
          title: 'title 1/3',
          description: 'desc 1/3',
          text: 'text text text 1/3',
          tags: [{ id: 1, name: 'foo1' }, { id: 2, name: 'foo2' }],
        },
        {
          title: 'title 2/3',
          description: 'desc 2/3',
          text: 'text text text 2/3',
          tags: [{ id: 2, name: 'foo2' }],
        },
        {
          title: 'title 3/3',
          description: 'desc 3/3',
          text: 'text text text 3/3',
          tags: [{ id: 2, name: 'foo2' }, { id: 3, name: 'foo3' }],
        },
      ],
    });
  }

}

Then you need to run the TestSeeder; let's do that in the initTestApp helper, right after we call orm.schema.createSchema():

utils.ts
await orm.schema.createSchema();
await orm.seeder.seed(TestSeeder);

And adjust the test assertion, as we now get 3 articles in the feed:

article.test.ts
expect(res.json()).toMatchObject({
  items: [
    { author: 1, slug: 'title-13', title: 'title 1/3' },
    { author: 1, slug: 'title-23', title: 'title 2/3' },
    { author: 1, slug: 'title-33', title: 'title 3/3' },
  ],
  total: 3,
});

Now run npm test to verify things work as expected.

That should be enough for now, but don't worry, we will get back to this topic later on.

SchemaGenerator

Earlier in the guide, when we needed to create the database for testing, we used the SchemaGenerator to recreate our database. Let's talk a bit more about this class.

The SchemaGenerator is responsible for generating SQL queries based on your entity metadata. In other words, it translates the entity definitions into the Data Definition Language (DDL). Moreover, it can also understand your current database schema and compare it with the metadata, producing the queries needed to bring your schema in sync.

It can be used programmatically:

// to get the queries
const diff = await orm.schema.getUpdateSchemaSQL();
console.log(diff);

// or to run the queries
await orm.schema.updateSchema();

With orm.schema.updateSchema() you could easily set up the same behavior TypeORM offers via synchronize: true, just by putting it into your app right after the ORM gets initialized (or into some app bootstrap code). Keep in mind this approach can be destructive and is discouraged - you should always verify the queries the SchemaGenerator produces before running them!
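For completeness, a hedged sketch of what that (discouraged) auto-sync would look like in our bootstrap function:

export async function bootstrap(port = 3001) {
  const db = await initORM();

  // danger zone: diff the entity metadata against the live schema and apply it blindly
  await db.orm.schema.updateSchema();

  // ... rest of the fastify setup stays the same
}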

Or via the CLI:

To run the queries, replace --dump with --run.

npx mikro-orm-esm schema:create --dump  # Dumps create schema SQL
npx mikro-orm-esm schema:update --dump # Dumps update schema SQL
npx mikro-orm-esm schema:drop --dump # Dumps drop schema SQL

Your production database (the one in the sqlite.db file in the root of your project) is probably out of sync, as we were mostly using the in-memory database inside the tests. Let's try to sync it via the CLI. First, run it with the --dump (or -d) flag to see what queries it generates, then run them via --run (or -r):

# first check what gets generated
npx mikro-orm-esm schema:update --dump

# and when it's fine, sync the schema
npx mikro-orm-esm schema:update --run

If this command does not work and produces some invalid queries, you can always recreate the schema from scratch by first calling schema:drop --run.

Working with the SchemaGenerator can be handy when prototyping the initial app, or especially when testing, where you might want many databases with the latest schema, regardless of what your production schema looks like. But beware, it can be very dangerous when used on a real production database. Luckily, we have a solution for that - migrations.

Migrations

To use migrations, you first need to install the @mikro-orm/migrations package for SQL drivers (or @mikro-orm/migrations-mongodb for MongoDB), and register the Migrator extension in your ORM config.

MikroORM has integrated support for migrations via umzug. It allows you to generate migrations with the current schema differences, as well as manage their execution. By default, each migration is executed inside a transaction, and all of them are wrapped in one master transaction, so if one of them fails, everything is rolled back.

Let's install the migrations package and try to create your first migration:

npm install @mikro-orm/migrations

Then register the Migrator extension in your ORM config:

import { defineConfig } from '@mikro-orm/sqlite';
import { SeedManager } from '@mikro-orm/seeder';
import { Migrator } from '@mikro-orm/migrations';

export default defineConfig({
  // ...
  extensions: [SeedManager, Migrator],
});

And finally, try to create your first migration:

npx mikro-orm-esm migration:create

If you followed the guide closely, you should see this message:

No changes required, schema is up-to-date

That is because you just synchronized the schema by calling npx mikro-orm-esm schema:update --run a moment ago. You have two options here: drop the schema first, or a less destructive one - an initial migration.

Initial migration

If you want to start using migrations and you already have the schema generated, the --initial flag will help keep the existing schema while generating the first migration based only on the entity metadata. It can be used only if the schema is empty or fully up-to-date. The generated migration will be automatically marked as executed if your schema already exists - if not, you will need to execute it manually like any other migration, via npx mikro-orm-esm migration:up.

An initial migration can be created only if no migrations were previously generated or executed. If you are starting fresh and have no schema yet, you don't need the --initial flag, a regular migration will do the job too.

npx mikro-orm-esm migration:create --initial

This will create the initial migration in the src/migrations directory, containing the queries from the schema:create command. The migration will be automatically marked as executed, because our schema was already in sync.

Migration class

Let's take a look at the generated migration. You can see there is a class that extends the Migration abstract class from the @mikro-orm/migrations package:

Migration20220913202829.ts
import { Migration } from '@mikro-orm/migrations';

export class Migration20220913202829 extends Migration {

  async up(): Promise<void> {
    this.addSql('create table `tag` (`id` integer not null primary key autoincrement, `created_at` datetime not null, `updated_at` datetime not null, `name` text not null);');
    // ...
  }

}

To support undoing those changes, you can implement the down method, which throws an error by default.
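A hedged sketch of what such a down method could look like for the migration above (the exact statements depend on what the up migration created):

async down(): Promise<void> {
  // revert what up() did, in reverse order
  this.addSql('drop table if exists `tag`;');
}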

Down migrations and SQLite

MikroORM will generate the down migrations automatically (although not for the initial migration, for safety reasons), with one exception - the SQLite driver, due to its limited capabilities. If you use any other driver, a down migration will be generated (unless it is an initial migration).

You can also execute queries inside the up()/down() methods via this.execute('...'), which will run them in the same transaction as the rest of the migration. The this.addSql('...') method also accepts knex instances; the knex instance can be accessed via this.getKnex().
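A small illustrative sketch of both inside a migration - the approved column tweak is hypothetical and not part of our schema changes:

async up(): Promise<void> {
  // plain SQL string, queued and run in the migration transaction
  this.addSql('alter table `comment` add column `approved` integer not null default 0;');

  // a knex query builder instance is accepted by addSql() as well
  const knex = this.getKnex();
  this.addSql(knex('comment').update({ approved: 1 }));
}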

Read more about migrations in the documentation.

One more entity

The migrations are set up, so let's test them by adding one more entity - the Comment, again belonging to the article module, so it goes into src/modules/article/comment.entity.ts.

comment.entity.ts
import { Entity, ManyToOne, Property } from '@mikro-orm/core';
import { Article } from './article.entity.js';
import { User } from '../user/user.entity.js';
import { BaseEntity } from '../common/base.entity.js';

@Entity()
export class Comment extends BaseEntity {

  @Property({ length: 1000 })
  text!: string;

  @ManyToOne()
  article!: Article;

  @ManyToOne()
  author!: User;

}

and a OneToMany inverse side in the Article entity:

@OneToMany({ mappedBy: 'article', eager: true, orphanRemoval: true })
comments = new Collection<Comment>(this);

Don't forget to add the repository to our simple DI container too:

export interface Services {
  orm: MikroORM;
  em: EntityManager;
  user: UserRepository;
  article: EntityRepository<Article>;
  comment: EntityRepository<Comment>;
  tag: EntityRepository<Tag>;
}

export async function initORM(options?: Options): Promise<Services> {
  // ...

  return cache = {
    orm,
    em: orm.em,
    user: orm.em.getRepository(User),
    article: orm.em.getRepository(Article),
    comment: orm.em.getRepository(Comment),
    tag: orm.em.getRepository(Tag),
  };
}

We are using two new options here, eager and orphanRemoval (see the short sketch after the list):

  • eager: true will automatically populate this relation, just as if you used populate: ['comments'] explicitly.

  • orphanRemoval: true is a special type of cascading: any entity removed from such a collection will be deleted from the database, as opposed to being just detached from the relationship (by setting the foreign key to null).
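A minimal sketch of what that means in practice, assuming an article with comments already exists (the id 1 is arbitrary, and db is our container from db.ts):

const article = await db.article.findOneOrFail(1);

// `comments` is already loaded thanks to `eager: true`
const [firstComment] = article.comments.getItems();

// with `orphanRemoval: true`, removing the comment from the collection
// deletes its row on flush, instead of just unsetting the foreign key
article.comments.remove(firstComment);
await db.em.flush();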

Now create the migration via the CLI and run it. And just for the sake of testing, also try the other migration-related commands:

# create new migration based on the schema difference
npx mikro-orm-esm migration:create

# list pending migrations
npx mikro-orm-esm migration:pending

# run the pending migrations
npx mikro-orm-esm migration:up

# list executed migrations
npx mikro-orm-esm migration:list

You should see output similar to this:

npx mikro-orm-esm migration:create
Migration20220913205718.ts successfully created
npx mikro-orm-esm migration:pending

┌─────────────────────────┐
│ Name │
├─────────────────────────┤
│ Migration20220913205718 │
└─────────────────────────┘
npx mikro-orm-esm migration:up

Processing 'Migration20220913205718'
Applied 'Migration20220913205718'
Successfully migrated up to the latest version
npx mikro-orm-esm migration:list

┌─────────────────────────┬──────────────────────────┐
│ Name │ Executed at │
├─────────────────────────┼──────────────────────────┤
│ Migration20220913202829 │ 2022-09-13T18:57:12.000Z │
│ Migration20220913205718 │ 2022-09-13T18:57:27.000Z │
└─────────────────────────┴──────────────────────────┘

Migration snapshots

Creating a new migration will automatically save the target schema snapshot into the migrations folder. This snapshot will then be used when you try to create another migration, instead of the current database schema. This means that if you create a new migration before running the pending ones, you still get the right schema diff.

Snapshots should be versioned just like the regular migration files.

Snapshotting can be disabled via migrations.snapshot: false in the ORM config.
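For example, extending the config we already have (only the snapshot option is new here):

export default defineConfig({
  // ...
  extensions: [SeedManager, Migrator],
  migrations: {
    snapshot: false,
  },
});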

Running migrations automatically

Before we call it a day, let's automate running the migrations a bit - we can use the Migrator programmatically, in a similar way to the SchemaGenerator. We want to run them during the app bootstrap cycle, before it starts accepting connections, so a good place for that is our bootstrap function, right after we initialize the ORM.

app.ts
export async function bootstrap(port = 3001, migrate = true) {
  const db = await initORM();

  if (migrate) {
    // sync the schema
    await db.orm.migrator.up();
  }

  // ...
}

We need to do this conditionally, as we want to run the migrations only for the production database, not for our testing ones (those use the SchemaGenerator directly, together with the Seeder). Don't forget to pass false when calling the bootstrap() function from our test case:

utils.ts
export async function initTestApp(port: number) {
  const { orm } = await initORM({ ... });

  await orm.schema.createSchema();
  await orm.seeder.seed(TestSeeder);

  const { app } = await bootstrap(port, false); // <-- here

  return app;
}

⛳ Checkpoint 3

We now have 4 entities, a working web app with a single GET endpoint and a basic test case for it. We also set up migrations and seeding. This is our app.ts right now:

Due to the nature of how the ESM support in ts-node works, it is not possible to use it inside a StackBlitz project - we need to use node --loader instead. We also use an in-memory database, an SQLite feature available via the special database name :memory:.

This is our app.ts file after this chapter: