Version: 6.4

Schema First Guide

Although MikroORM is primarily a "code first" ORM, it can also be used in a "schema first" approach.

"Code first" vs "Schema first"

As the names suggest, in a "code first" approach, you write the entity definitions first, and generate the schema definition out of that code (using the schema generator). As a last step, you execute the migration statements against the database server. In a "schema first" approach, you write the schema definition first (or in the case of migrations, write the migrations first), execute it, and generate the entity definitions out of the database (using the entity generator).

Both approaches have some benefits and drawbacks.

Code first:

  • ✅ No need to get familiar with all SQL options for defining tables, columns, indexes and foreign keys. (It helps if you are though)

  • ✅ Easy to port between different database engines (until you opt into engine-specific features)

  • ✅ It is trivial to rename tables and columns, as well as add and remove them... As long as you only do one of those three things per entity per migration.

  • ❌ If you aren't careful, and do multiple changes to one entity in one go, database migrations can cause data loss. Careful manual review of generated migrations is needed to avoid this.

  • ❌ Performance may be suboptimal, as many database features are "out of sight, out of mind".

  • ❌ You may be missing out on possible M:N and 1:N relations that would in turn make your application logic simpler.

  • ❌ Hard to port to and from a different ORM, as oftentimes, features that are named the same may actually work differently, and conversely, the same features may be named differently.

Schema first:

  • ✅ No need to get familiar with the ORM's options. (It helps if you are though)

  • ✅ Easy to port to and from a different ORM (including different versions of the same ORM), even if that ORM is in another language.

  • ✅ If you're comfortable with SQL, it is trivial to add new tables, columns and relations, while keeping the entity definitions fully aware of all possible links, and your data safe.

  • ❌ Renames are a bit more involved, because the regenerated entities aren't part of the rest of your code.

  • ❌ Sufficiently complex schemas can end up triggering bugs in entity generation, which you then need to patch in some way before your application can even build.

  • ❌ You may be missing out on goodies from the ORM that make application logic simpler.

  • ❌ Likely harder to port to and from a different database engine, as even relatively "simple" database schemas are likely to end up needing database-specific features, which the entity generator will include where supported by the ORM. If using the database to its full potential, such a migration would be even more challenging.

What are we building?

In the rest of this guide, we will be building an application after first having made the database schema.

We'll end with the same application that you may have already created by following the "code first" guide, but re-create it from scratch again. Reading that guide beforehand is not strictly required, but we will make several references back to it as a point of comparison.

To take a peek at the final project we will be building, try cloning the mikro-orm/schema-first-guide GitHub project.

git clone https://github.com/mikro-orm/schema-first-guide.git

We will use MySQL for this project. Other database engines follow the same process. We are also assuming you already have MySQL itself installed locally and can connect to it via the username "root" and no password.
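If you want to double-check that assumption before continuing, a quick connection test from the shell might look like this (a sketch; it assumes the mysql command-line client is on your PATH):

```shell
# Verify we can reach the local server as root with no password
mysql -h localhost -u root -e 'SELECT VERSION();'
```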

In general, if you're building an application from scratch (as opposed to migrating an existing application), you can use GUI tools (e.g., in the case of MySQL, MySQL Workbench) to make this part of the process easier.

Here's the MySQL DDL of our initial application (before later migrations), as dumped by a DB creation tool (in this case, MySQL Workbench Forward Engineering):

schema.sql
-- MySQL Workbench Forward Engineering

SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION';

-- -----------------------------------------------------
-- Schema blog
-- -----------------------------------------------------

-- -----------------------------------------------------
-- Schema blog
-- -----------------------------------------------------
CREATE SCHEMA IF NOT EXISTS `blog` DEFAULT CHARACTER SET utf8 ;
USE `blog` ;

-- -----------------------------------------------------
-- Table `user`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `user` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`created_at` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`full_name` VARCHAR(255) NOT NULL,
`email` VARCHAR(255) NOT NULL,
`password` VARCHAR(255) NOT NULL,
`bio` TEXT NOT NULL,
PRIMARY KEY (`id`))
ENGINE = InnoDB;


-- -----------------------------------------------------
-- Table `article`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `article` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`created_at` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`slug` VARCHAR(255) NOT NULL,
`title` VARCHAR(255) NOT NULL,
`description` VARCHAR(1000) NOT NULL,
`text` TEXT NOT NULL,
`author` INT UNSIGNED NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `slug_UNIQUE` (`slug` ASC) VISIBLE,
INDEX `fk_article_user1_idx` (`author` ASC) VISIBLE,
CONSTRAINT `fk_article_user1`
FOREIGN KEY (`author`)
REFERENCES `user` (`id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;


-- -----------------------------------------------------
-- Table `comment`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `comment` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`created_at` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`text` VARCHAR(1000) NOT NULL,
`article` INT UNSIGNED NOT NULL,
`author` INT UNSIGNED NOT NULL,
PRIMARY KEY (`id`),
INDEX `fk_comment_article1_idx` (`article` ASC) VISIBLE,
INDEX `fk_comment_user1_idx` (`author` ASC) VISIBLE,
CONSTRAINT `fk_comment_article1`
FOREIGN KEY (`article`)
REFERENCES `article` (`id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_comment_user1`
FOREIGN KEY (`author`)
REFERENCES `user` (`id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;


-- -----------------------------------------------------
-- Table `tag`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `tag` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`created_at` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`name` VARCHAR(20) NOT NULL,
PRIMARY KEY (`id`))
ENGINE = InnoDB;


-- -----------------------------------------------------
-- Table `article_tag`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `article_tag` (
`article_id` INT UNSIGNED NOT NULL,
`tag_id` INT UNSIGNED NOT NULL,
PRIMARY KEY (`article_id`, `tag_id`),
INDEX `fk_article_tag_tag1_idx` (`tag_id` ASC) VISIBLE,
CONSTRAINT `fk_article_tag_article1`
FOREIGN KEY (`article_id`)
REFERENCES `article` (`id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_article_tag_tag1`
FOREIGN KEY (`tag_id`)
REFERENCES `tag` (`id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;


SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;

But we can place this in an initial migration file, to make our application work on blank MySQL servers as well.

We will use MikroORM's migrator to run our migrations, including the initial one. If you are migrating an existing application to MikroORM, you can instead keep doing the migrations in your existing setup, and regenerate your entities on every migration. Once you fully drop your old application, you can generate an initial migration in MikroORM.

Project setup

Install

We will use a similar setup to the "code first" guide.

Create a folder and cd into it:

mkdir blog-api && cd blog-api

Init the project:

npm init

Install the following:

npm install @mikro-orm/core \
  @mikro-orm/mysql \
  @mikro-orm/migrations \
  fastify

and some dev dependencies

npm install --save-dev @mikro-orm/cli \
  @mikro-orm/entity-generator \
  typescript \
  ts-node \
  @types/node \
  rimraf \
  vitest

ECMAScript Modules

Just as in the "code first" guide, we'll be using ECMAScript Modules. Make sure you have

package.json
{
  "type": "module",
  ...
}

in your package.json file.

Note that we don't have to use ECMAScript Modules. MikroORM also supports CommonJS. We are using ESM for the guides because we are making a new project in which we can, as all of our dependencies are ready for ECMAScript Modules.

Configuring TypeScript

We will use almost the same config as the "code first" guide. As mentioned there, adjust this config if you know what you're doing.

We'll include the ts-node config, and add emitDecoratorMetadata, because we'll be using the default metadata provider, which requires that setting in our TypeScript config.

tsconfig.json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "strict": true,
    "outDir": "dist",
    "declaration": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true
  },
  "include": [
    "./src/**/*.ts"
  ],
  "ts-node": {
    "esm": true,
    "transpileOnly": true
  }
}

Configuring the CLI

Configuring the MikroORM CLI tools is essential for the "schema first" approach. We need the migrator to run our migrations, as well as the entity generator to create our entities out of the schema state.

Here's a basic config we'll start with (and later extend to take full advantage of the entity generator's features):

src/mikro-orm.config.ts
import { defineConfig } from '@mikro-orm/mysql';
import { EntityGenerator } from '@mikro-orm/entity-generator';
import { Migrator } from '@mikro-orm/migrations';

export default defineConfig({
  multipleStatements: true,
  extensions: [EntityGenerator, Migrator],
  discovery: {
    // we need to disable validation for no entities, due to the entity generation
    warnWhenNoEntities: false,
  },
  entities: ['dist/**/*.entity.js'],
  entitiesTs: ['src/**/*.entity.ts'],
  host: 'localhost',
  user: 'root',
  password: '',
  dbName: 'blog',
  // enable debug mode to log SQL queries and discovery information
  debug: true,
  migrations: {
    path: 'dist/migrations',
    pathTs: 'src/migrations',
  },
  entityGenerator: {
    save: true,
    path: 'src/modules',
    esmImport: true,
    readOnlyPivotTables: true,
    outputPurePivotTables: true,
    bidirectionalRelations: true,
    customBaseEntityName: 'Base',
    useCoreBaseEntity: true,
  },
});

And you can also add to your package.json

package.json
{
  "mikro-orm": {
    "useTsNode": true
  }
}

Or alternatively, set the environment variable MIKRO_ORM_CLI_USE_TS_NODE to a non-empty value.

To keep the example simple, we're keeping all of our configuration in a single config file, but you may split it into a shared config and tool-specific configs. In that case, you will also want to supply the correct config file to the correct tool upon running it. You will want to wrap those calls in package.json scripts that do that for you.

Generating initial entities

We'll first generate and run an initial migration to generate entities out of. We'll need to add the "--blank" option so that the migration generator accepts that we don't currently have any entities.

Run

npx mikro-orm-esm migration:create --initial --blank

And let's edit it to include the contents of the schema:

migrations/Migration00000000000000.ts
import { Migration } from '@mikro-orm/migrations';

export class Migration00000000000000 extends Migration {

async up(): Promise<void> {
await this.execute(`
CREATE TABLE IF NOT EXISTS \`user\` (
\`id\` INT UNSIGNED NOT NULL AUTO_INCREMENT,
\`created_at\` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
\`updated_at\` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
\`full_name\` VARCHAR(255) NOT NULL,
\`email\` VARCHAR(255) NOT NULL,
\`password\` VARCHAR(255) NOT NULL,
\`bio\` TEXT NOT NULL,
PRIMARY KEY (\`id\`))
ENGINE = InnoDB;

CREATE TABLE IF NOT EXISTS \`article\` (
\`id\` INT UNSIGNED NOT NULL AUTO_INCREMENT,
\`created_at\` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
\`updated_at\` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
\`slug\` VARCHAR(255) NOT NULL,
\`title\` VARCHAR(255) NOT NULL,
\`description\` VARCHAR(1000) NOT NULL,
\`text\` TEXT NOT NULL,
\`author\` INT UNSIGNED NOT NULL,
PRIMARY KEY (\`id\`),
UNIQUE INDEX \`slug_UNIQUE\` (\`slug\` ASC) VISIBLE,
INDEX \`fk_article_user1_idx\` (\`author\` ASC) VISIBLE,
CONSTRAINT \`fk_article_user1\`
FOREIGN KEY (\`author\`)
REFERENCES \`user\` (\`id\`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;

CREATE TABLE IF NOT EXISTS \`comment\` (
\`id\` INT UNSIGNED NOT NULL AUTO_INCREMENT,
\`created_at\` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
\`updated_at\` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
\`text\` VARCHAR(1000) NOT NULL,
\`article\` INT UNSIGNED NOT NULL,
\`author\` INT UNSIGNED NOT NULL,
PRIMARY KEY (\`id\`),
INDEX \`fk_comment_article1_idx\` (\`article\` ASC) VISIBLE,
INDEX \`fk_comment_user1_idx\` (\`author\` ASC) VISIBLE,
CONSTRAINT \`fk_comment_article1\`
FOREIGN KEY (\`article\`)
REFERENCES \`article\` (\`id\`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT \`fk_comment_user1\`
FOREIGN KEY (\`author\`)
REFERENCES \`user\` (\`id\`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;

CREATE TABLE IF NOT EXISTS \`tag\` (
\`id\` INT UNSIGNED NOT NULL AUTO_INCREMENT,
\`created_at\` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
\`updated_at\` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
\`name\` VARCHAR(20) NOT NULL,
PRIMARY KEY (\`id\`))
ENGINE = InnoDB;

CREATE TABLE IF NOT EXISTS \`article_tag\` (
\`article_id\` INT UNSIGNED NOT NULL,
\`tag_id\` INT UNSIGNED NOT NULL,
PRIMARY KEY (\`article_id\`, \`tag_id\`),
INDEX \`fk_article_tag_tag1_idx\` (\`tag_id\` ASC) VISIBLE,
CONSTRAINT \`fk_article_tag_article1\`
FOREIGN KEY (\`article_id\`)
REFERENCES \`article\` (\`id\`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT \`fk_article_tag_tag1\`
FOREIGN KEY (\`tag_id\`)
REFERENCES \`tag\` (\`id\`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
`);
}
}

Then run the migration with

npx mikro-orm-esm migration:up

And now, you can generate the initial entities with

npx mikro-orm-esm generate-entities --save

If all is good up to this point, you should see the following directory structure:

├── package.json
├── src
│   ├── mikro-orm.config.ts
│   └── modules
│       ├── Article.ts
│       ├── ArticleTag.ts
│       ├── Base.ts
│       ├── Comment.ts
│       ├── Tag.ts
│       └── User.ts
└── tsconfig.json
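If you're curious what the generated files contain, a generated entity is roughly shaped like the following (an illustrative sketch, not verbatim generator output; the exact decorator options depend on your MikroORM version and config, and the id/timestamp columns end up in the generated Base class):

```typescript
import { Entity, Property } from '@mikro-orm/core';
import { Base } from './Base.js';

// Illustrative sketch of a generated entity; the real output may differ.
@Entity()
export class User extends Base {

  @Property({ length: 255 })
  fullName!: string;

  @Property({ length: 255 })
  email!: string;

  @Property({ length: 255 })
  password!: string;

  @Property({ columnType: 'text' })
  bio!: string;

}
```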

Manipulating entity file locations and names

You may have noticed that the files aren't following the *.entity.ts suffix we configured initially. Further, they're all under one folder. Both of these are because of the default names the entity generator uses. We can override the fileName option in the config to save our files in different locations, and add suffixes:

src/mikro-orm.config.ts
import { defineConfig } from '@mikro-orm/mysql';
import { EntityGenerator } from '@mikro-orm/entity-generator';
import { Migrator } from '@mikro-orm/migrations';

export default defineConfig({
  // rest of the config
  entityGenerator: {
    fileName: (entityName) => {
      switch (entityName) {
        case 'Article':
        case 'ArticleTag':
        case 'Tag':
        case 'Comment':
          return `article/${entityName.toLowerCase()}.entity`;
        case 'User':
          return `user/${entityName.toLowerCase()}.entity`;
        default:
          return `common/${entityName.toLowerCase()}.entity`;
      }
    },
    // rest of the entity generator config
  },
});

If you first remove all files from the modules folder (or just remove the modules folder itself), and then re-run the entity generator, you should now instead see the following directory structure:

├── package.json
├── src
│   ├── mikro-orm.config.ts
│   └── modules
│       ├── article
│       │   ├── article.entity.ts
│       │   ├── articletag.entity.ts
│       │   ├── tag.entity.ts
│       │   └── comment.entity.ts
│       ├── common
│       │   └── base.entity.ts
│       └── user
│           └── user.entity.ts
└── tsconfig.json

When re-generating the entities later, you will want to first remove all files with the suffix ".entity.ts".

npx rimraf -g ./src/modules/**/*.entity.ts

Because we'll be regenerating entities a lot, and doing so requires removal of the old ones first, let's add a script in package.json for that:

package.json
{
  "scripts": {
    "regen": "rimraf -g ./src/modules/**/*.entity.ts && mikro-orm-esm generate-entities --save"
  }
}

And now, you can call

npm run regen

Using the generated entities

Because the generated entities now match our runtime configuration, we can init the ORM in our application, and they should be picked up.

We're going to use a similar approach for our application organization as the one in the "code first" guide.

Specifically, our DB wrapper:

src/db.ts
import {
  type EntityManager,
  type EntityRepository,
  MikroORM,
  type Options,
} from "@mikro-orm/mysql";
import config from "./mikro-orm.config.js";
import { Article } from "./modules/article/article.entity.js";
import { Tag } from "./modules/article/tag.entity.js";
import { User } from "./modules/user/user.entity.js";
import { Comment } from "./modules/article/comment.entity.js";

export interface Services {
  orm: MikroORM;
  em: EntityManager;
  user: EntityRepository<User>;
  article: EntityRepository<Article>;
  tag: EntityRepository<Tag>;
  comment: EntityRepository<Comment>;
}

let cache: Services;

export async function initORM(options?: Options): Promise<Services> {
  if (cache) {
    return cache;
  }

  const orm = await MikroORM.init({
    ...config,
    ...options,
  });

  return (cache = {
    orm,
    em: orm.em,
    user: orm.em.getRepository(User),
    article: orm.em.getRepository(Article),
    tag: orm.em.getRepository(Tag),
    comment: orm.em.getRepository(Comment),
  });
}

The app itself:

src/app.ts
import { RequestContext } from '@mikro-orm/core';
import { fastify } from 'fastify';
import { initORM } from './db.js';

export async function bootstrap(port = 3001, migrate = true) {
  const db = await initORM({
    ensureDatabase: { create: false },
  });

  if (migrate) {
    // sync the schema
    await db.orm.migrator.up();
  }

  const app = fastify();

  // register request context hook
  app.addHook('onRequest', (request, reply, done) => {
    RequestContext.create(db.em, done);
  });

  // shut down the connection when closing the app
  app.addHook('onClose', async () => {
    await db.orm.close();
  });

  // register routes here
  app.get('/article', async (request) => {
    const { limit, offset } = request.query as {
      limit?: number;
      offset?: number;
    };
    const [items, total] = await db.article.findAndCount(
      {},
      {
        limit,
        offset,
      },
    );

    return { items, total };
  });

  const url = await app.listen({ port });

  return { app, url };
}

And the server entry point:

src/server.ts
import { bootstrap } from './app.js';

try {
  const { url } = await bootstrap();
  console.log(`server started at ${url}`);
} catch (e) {
  console.error(e);
}

Finally, let's add a script in package.json to start the application, as well as a script to check our code:

package.json
{
  "scripts": {
    "check": "tsc --noEmit",
    "start": "node --no-warnings=ExperimentalWarning --loader ts-node/esm src/server.ts"
  }
}

While you don't need to run the check script before starting the application, you may find it convenient to check for errors after significant changes.

⛳ Checkpoint 1

At this point, we have an application similar to the one at "Checkpoint 3" of the "code first" guide. The application itself can only list articles, which don't exist yet, unless we manually add them with SQL queries. However, we already defined all the entities we'll use. We'll later do some tweaks on top of the generated entities to showcase the full extent of the entity generator's features in a useful way. However, you're already at a point where you can use the generated entities "as is" in your application code, and code the remaining logic around them.

You can verify the application is working by starting it, and opening http://localhost:3001/article in your browser.

Making changes to existing tables and columns

Given the current simplicity of our application, we don't have to worry about compatibility. We can just run

npx mikro-orm-esm migration:create --blank

to create a new empty migration, prepare whatever SQL statements we need to perform in it, run

npx mikro-orm-esm migration:up

and finally re-generate the entities with

npm run regen

This flow gets a bit more complex once your application grows enough that the rest of your code actually references individual entities and properties, meaning you can't remove or rename things without considering these usages.

Renaming existing tables and columns

When you would like to rename a table or a column, or even adjust the names of classes and properties, you should do so in code first. Use your IDE to rename all usages. In the case of class names, you should also rename the file and its imports. Once you've done so, you can continue with the rest of the flow as shown above - create a migration in which you do the rename in the database, run it, and regenerate the entities. Try to rebuild your application immediately after entity regeneration. The old files (that you had edited manually) will be removed, but that is ok, because thanks to the migration, the new ones will now have the correct names and already work with the rest of your application. It is possible that entity regeneration will reveal some relations which were also renamed, due to being named after the table/column that you renamed. In that event, your application will fail to build. You will want to restore your earlier entities from version control, and rename the affected relations, before regenerating the entities again, and trying to build again.

Note that with such database renames, any running application instance will break, since it will be referring to a now non-existent name. When running in production, you will want to avoid renames, and instead use the "expand, then contract" migration strategy.

"Expand, then contract" migration strategy

The way the "expand, then contract" migration strategy works is that you do the following, in this order:

  1. Create the new table/column (as a migration + entity regeneration)

  2. Make the new version of your app write to both the old and the new table/column (in the same deploy as step 1 if and only if you also execute migrations automatically at startup; otherwise, ensure the migration from step 1 is executed before the app runs)

  3. Copy over old data from the old table/column into the new table/column (in a second migration that doesn't require entity regeneration or application code changes)

  4. Refactor any reads from the old table/column to use the new table/column instead (ideally after old data is already migrated).

  5. After ensuring any read references to the old table/column are gone, stop writing to the old table/column (deploy changed application code without related migrations).

  6. After ensuring any read and write references to the old table/column are gone, remove the old table/column (as a final migration + entity regeneration).
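As a concrete illustration of step 3, a data-copying migration might look like the following. This is a hypothetical sketch: it assumes a new `display_name` column is replacing `full_name` on the `user` table, neither of which is part of this guide's actual schema changes.

```typescript
import { Migration } from '@mikro-orm/migrations';

// Hypothetical "expand" phase migration: backfill the new column from the
// old one, without touching entities or application code.
export class Migration00000000000002 extends Migration {

  async up(): Promise<void> {
    await this.execute('UPDATE `user` SET `display_name` = `full_name`');
  }

  async down(): Promise<void> {
    // nothing to undo here; the old column is still the source of truth
  }

}
```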

Technically, you can also apply this strategy if you are using a "code first" approach, and in fact, you very much should. Failure to follow this strategy in a "code first" approach may lead to accidental data loss (unless you carefully review generated migrations), as well as downtime. Failure to follow this strategy in a "schema first" approach leads to downtime on production, and build errors during development.

Naming strategy considerations

The names of your tables and properties don't have to match exactly the names of classes and properties in your application code. Matching them is what the entity generator does by default to minimize surprises, but you can override this.

Let's make it so that our tables use the plural form of words, while the entity class names will be singular. In the end, the application code will not need changes, because it is still referring to the singular word "article".

First, let's add the package pluralize, to do the transformation between singular and plural forms automatically.

npm install --save-dev pluralize @types/pluralize

Next, let's add a migration to rename our tables:

npx mikro-orm-esm migration:create --blank

and in it,

migrations/Migration00000000000001.ts
import { Migration } from '@mikro-orm/migrations';

export class Migration00000000000001 extends Migration {

  async up(): Promise<void> {
    await this.execute(`
      RENAME TABLE
        \`article\` TO \`articles\`,
        \`article_tag\` TO \`article_tags\`,
        \`tag\` TO \`tags\`,
        \`comment\` TO \`comments\`,
        \`user\` TO \`users\`
    `);
  }

  async down(): Promise<void> {
    await this.execute(`
      RENAME TABLE
        \`articles\` TO \`article\`,
        \`article_tags\` TO \`article_tag\`,
        \`tags\` TO \`tag\`,
        \`comments\` TO \`comment\`,
        \`users\` TO \`user\`
    `);
  }

}

If you now just run the migration and regenerate, you will see your entities with plural form. To keep them in singular form, we can override the getEntityName method of the UnderscoreNamingStrategy (which is the default naming strategy).

src/mikro-orm.config.ts
import { UnderscoreNamingStrategy } from '@mikro-orm/core';
import pluralize from 'pluralize';
// rest of imports

export default defineConfig({
  // rest of the config
  namingStrategy: class extends UnderscoreNamingStrategy {
    override getEntityName(tableName: string, schemaName?: string): string {
      return pluralize.singular(super.getEntityName(tableName, schemaName));
    }
  },
  entityGenerator: {
    // rest of entity generator config
  },
});

With this addition, if you regenerate the entities now, the classes and the respective file names will still be in singular form, as they were before.

You may notice that the tableName option is also added to all entities. That is because there is a separate method in the naming strategy - classToTableName - for converting class names back to table names. The entity generator checks if this method produces the correct table, and if not, it adds the tableName option to ensure the correct table is used in the end. You may override the classToTableName method in the naming strategy if you wish to instead convert the singular form to plural automatically, and thus omit the tableName option once again. The entity generator will ensure that any errors made by "pluralize" would be mitigated by an explicit tableName entry. Alternatively, you may keep the classToTableName method at its default, and keep the tableName options around, to make your generated entities code searchable by the table names.

还有 columnNameToProperty 方法,顾名思义,它告诉实体生成器为给定的列名生成什么属性名称。类似地,还有执行相反转换的 propertyToColumnName 方法。如果两者之间不匹配,则选项 fieldName 或 fieldNames 将填充列的名称。

¥There's also the columnNameToProperty method, which, as the name suggests, tells the entity generator what property name to produce for a given column name. Similarly, there is propertyToColumnName that does the reverse. If there is a mismatch between the two, the options fieldName or fieldNames will be filled with the names of the columns.
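默认的 UnderscoreNamingStrategy 中,这两个方法所做的转换大致相当于下面的示意(仅用于说明;真正的实现位于 @mikro-orm/core 中):

¥In the default UnderscoreNamingStrategy, the conversions these two methods perform are roughly equivalent to the sketch below (for illustration only; the real implementation lives in @mikro-orm/core):

```typescript
// Sketch of the underscore <-> camelCase conversions behind
// columnNameToProperty and propertyToColumnName in the default
// UnderscoreNamingStrategy (illustrative, not the actual implementation).
function columnNameToProperty(columnName: string): string {
  return columnName.replace(/_(\w)/g, (_match, letter: string) => letter.toUpperCase());
}

function propertyToColumnName(propertyName: string): string {
  return propertyName.replace(/[A-Z]/g, letter => `_${letter.toLowerCase()}`);
}

console.log(columnNameToProperty('full_name')); // fullName
console.log(propertyToColumnName('fullName')); // full_name
```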

向实体添加应用级逻辑

¥Adding application level logic to entities

虽然你可以在 DB 模式级别使用外键关系、检查约束、唯一索引和生成的列做很多事情,但有些事情不能仅由模式决定。同时,在 "架构优先" 方法中,你必须随时保持实体能够重新生成。为了弥合这两个看似相互冲突的目标之间的差距,实体生成器有两个回调,它在实体生成过程中调用它们。在其中,你可以操作实体元数据,这反过来会影响最终生成的代码。你应该尽可能简化这些钩子中的修改,以使你的代码尽可能易于移植。

¥While there is a lot you can do on DB schema level with foreign key relations, check constraints, unique indexes and generated columns, there are some things that can't be determined by the schema alone. At the same time, in a "schema first" approach, you have to keep your entities able to be regenerated at any time. To bridge the gap between these two seemingly conflicting goals, the entity generator has two callbacks that it calls during the entity generation process. In them, you can manipulate the entity metadata, which will in turn influence the generated code in the end. You should keep your modifications during those hooks as simple as possible, to keep your code as portable as possible.

两个配置选项是 onInitialMetadata 和 onProcessedMetadata。第一个回调在从数据库获取原始元数据后立即调用,第二个回调在实体生成器完成它通常会从该元数据中自动推断的所有内容(例如 M:N 关系、关系的反向端、基类等)之后运行。你可以将 onInitialMetadata 视为选择加入额外功能的地方,将 onProcessedMetadata 视为选择退出你原本被选择加入的功能的地方。

¥The two configuration options are onInitialMetadata and onProcessedMetadata. The first is called immediately after getting the raw metadata from your database, and the second runs after the entity generator goes through everything it normally infers automatically from that metadata - things like M:N relations, inverse sides of relations, base classes and more. You can think of onInitialMetadata as the place to opt into extra features, and onProcessedMetadata as the place to opt out of features that you were otherwise opted into.

如果你之前已阅读完整个 "代码优先" 指南,现在正在阅读本指南,你可能已经注意到我们在实体定义中缺少一些内容。让我们添加其中一些。

¥If you went through the whole "code first" guide before, and are now going through this guide, you may have noticed that we are missing a few things in the entity definitions. Let's add some of them.

首先,让我们将 "article" 的 "text" 设为惰性(lazy)属性。此外,让我们将 "password" 也设为惰性,并将其设为 "hidden",以避免在响应中意外泄露它。我们将在 onInitialMetadata 钩子中执行此操作,尽管这些更改同样可以在 onProcessedMetadata 中完成。

¥First, let's make the "text" of the "article" be lazy. Also, let's make the "password" lazy too, as well as make it "hidden", to avoid accidentally leaking it in responses. We'll do that in the onInitialMetadata hook, though these changes in particular can be done in onProcessedMetadata just the same.

src/mikro-orm.config.ts
// rest of imports

export default defineConfig({
  // rest of the config
  entityGenerator: {
    onInitialMetadata: (metadata, platform) => {
      const userEntity = metadata.find(meta => meta.className === 'User');
      if (userEntity) {
        const passwordProp = userEntity.properties.password;
        passwordProp.hidden = true;
        passwordProp.lazy = true;
      }

      const articleEntity = metadata.find(meta => meta.className === 'Article');
      if (articleEntity) {
        const textProp = articleEntity.properties.text;
        textProp.lazy = true;
      }
    },
    // rest of entity generator config
  }
});

在处理密码哈希和验证时,我们可以注册全局钩子来处理密码。这与 "代码优先" 指南的做法类似,只不过不是在实体级别进行,而是在全局范围内进行。实体生成器目前的限制是你无法向实体本身添加钩子。但是,有一个简单的解决方法,你实际上可能会发现它更方便,最终用起来也不那么 "神奇" - 自定义类型。也就是说,我们可以为密码定义一个自定义类型对象,这将让我们在写入时自动验证密码并对其进行哈希处理。

¥When it comes to handling the password hashing and verification, we could register global hooks to handle the password. That would be similar to what the "code first" guide does, except doing that would not be doing it at the entity, but globally. A current limitation of the entity generator is that you can't add hooks to the entity itself. However, there is an easy workaround that you may in fact find more convenient and ultimately less magical to work with - custom types. That is, we can define a custom type object for the password, which will let us verify the password and hash it automatically on writes.

让我们添加此类型,并让用户实体的密码属性使用它。我们将使用 argon2 作为哈希函数,因此首先,使用以下方式安装它

¥Let's add this type, and make the password prop of the User entity use it. We'll use argon2 as the hashing function, so first, install it with

npm install argon2

下一步是创建与数据库值相互转换的类。让我们将其添加到 "users" 模块。我们将使用后缀 "runtimeType" 来明确表示它将被设置为实体的运行时类型。我们还会让该类型在成功验证后(如有需要)自动重新哈希密码。

¥The next step is to create the class that our DB value will be transformed to and from. Let's add it to the "users" module. We'll use the suffix "runtimeType" to make it clear this will be set as the runtime type at an entity. We'll also make the type automatically rehash the password on successful verification if needed.

src/modules/user/password.runtimeType.ts
import { hash, verify, needsRehash, Options } from 'argon2';

const hashOptions: Options = {
  hashLength: 100
};

export class Password {
  static async fromRaw(raw: string): Promise<Password> {
    return new Password(await hash(raw, hashOptions));
  }

  static fromHash(hash: string): Password {
    return new Password(hash);
  }

  #hash: string;

  private constructor(hash: string) {
    this.#hash = hash;
  }

  verify(raw: string): Promise<boolean> {
    return verify(this.#hash, raw, hashOptions);
  }

  needsRehash(): boolean {
    return needsRehash(this.#hash, hashOptions);
  }

  async verifyAndMaybeRehash(raw: string): Promise<boolean> {
    const verifyResult = await this.verify(raw);
    if (verifyResult && this.needsRehash()) {
      this.#hash = await hash(raw, hashOptions);
    }
    return verifyResult;
  }

  toString() {
    return this.#hash;
  }
}

然后添加执行转换的 ORM 自定义类型:

¥and then add the ORM custom type that does the transformation:

src/modules/user/password.type.ts
import { type Platform, type TransformContext, Type } from '@mikro-orm/core';
import { Password } from './password.runtimeType.js';

export class PasswordType extends Type<Password, string> {

  convertToJSValue(value: string, platform: Platform): Password {
    return Password.fromHash(value);
  }

  convertToDatabaseValue(value: Password, platform: Platform, context?: TransformContext): string {
    return `${value}`;
  }

  compareAsType() {
    return 'string';
  }

}

现在,让我们修改我们的 fileNameonInitialMetadata 函数以识别这两个新文件并将密码与它们关联。

¥Now, let's modify our fileName and onInitialMetadata functions to recognize these two new files and associate the password with them.

src/mikro-orm.config.ts
// rest of imports

export default defineConfig({
  // rest of the config
  entityGenerator: {
    fileName: (entityName) => {
      switch (entityName) {
        case 'Article':
        case 'ArticleTag':
        case 'Tag':
        case 'Comment':
          return `article/${entityName.toLowerCase()}.entity`;
        case 'User':
          return `user/${entityName.toLowerCase()}.entity`;
        case 'Password':
          return `user/password.runtimeType`;
        case 'PasswordType':
          return `user/password.type`;
        default:
          return `common/${entityName.toLowerCase()}.entity`;
      }
    },
    onInitialMetadata: (metadata, platform) => {
      const userEntity = metadata.find(meta => meta.className === 'User');
      if (userEntity) {
        const passwordProp = userEntity.properties.password;
        passwordProp.hidden = true;
        passwordProp.lazy = true;
        passwordProp.type = 'PasswordType';
        passwordProp.runtimeType = 'Password';
      }

      const articleEntity = metadata.find(meta => meta.className === 'Article');
      if (articleEntity) {
        const textProp = articleEntity.properties.text;
        textProp.lazy = true;
      }
    },
    // rest of entity generator config
  }
});

再生后,你将能够像这样在 app.ts 中登录:

¥After regeneration, you would be able to do the login in app.ts like so:

src/app.ts
import { RequestContext, EntityData } from '@mikro-orm/core';
import { fastify } from 'fastify';
import { initORM } from './db.js';
import { User } from './modules/user/user.entity.js';
import { Password } from './modules/user/password.runtimeType.js';

const emptyHash = await Password.fromRaw('');

//...

// register new user
app.post('/sign-up', async request => {
  const body = request.body as EntityData<User, true>;

  if (!body.email || !body.fullName || !body.password) {
    throw new Error('One of required fields is missing: email, fullName, password');
  }

  if ((await db.user.count({ email: body.email })) > 0) {
    throw new Error('This email is already registered, maybe you want to sign in?');
  }

  const user = db.user.create({
    fullName: body.fullName,
    email: body.email,
    password: await Password.fromRaw(body.password),
    bio: body.bio ?? '',
  });
  await db.em.persist(user).flush();

  // after flush, we have the `user.id` set
  console.log(`User ${user.id} created`);

  return user;
});

app.post('/sign-in', async request => {
  const { email, password } = request.body as { email: string; password: string };
  const err = new Error('Invalid combination of email and password');
  if (password === '' || email === '') {
    throw err;
  }

  const user = await db.user.findOne({ email }, {
    populate: ['password'], // password is a lazy property, we need to populate it
  })
    // On failure, we return a pseudo user with an empty password hash.
    // This approach minimizes the effectiveness of timing attacks
    ?? { password: emptyHash };

  if (await user.password.verifyAndMaybeRehash(password)) {
    await db.em.flush();
    return user; // password is a hidden property, so it won't be returned, even on success
  }

  throw err;
});

命名策略与元数据钩子

¥Naming strategy vs metadata hooks

命名策略可能看起来像是元数据钩子的更专业版本,但在更改名称之间也存在一个关键区别。使用命名策略,所有引用也会使用新名称进行更新。使用元数据钩子,更改 "original" 不会更新对它的任何引用。你可以自己更新引用,但这样做的效率不如仅仅覆盖命名策略。

¥It may look like the naming strategy is a more specialized version of the metadata hooks, but there is also one critical difference between changing the name from one vs the other. With a naming strategy, all references are also updated with the new name. With a metadata hook, changing the "original" does not update any references to it. You may update the references yourself, but doing so is less efficient than just overriding the naming strategy.

但除了效率之外,这个 "loophole" 实际上是有益的。我们可以使用映射的超类。为此,通过应用钩子重命名实体,然后创建一个具有原始名称的类来代替原始类。新的 "manual" 类应从生成的类继承。

¥But efficiency aside, this "loophole" can in fact be beneficial. We can use mapped superclasses. To do that, rename an entity via the application hooks, and then create a class with the original name, to take the place of the original class. The new "manual" class should inherit from the generated class.

这种方法可用于缓解实体生成器的任何缺点。最值得注意的是,创建构造函数和其他辅助方法很有用,因为生成器没有给你任何添加此类方法的方法。

¥This approach can be used to mitigate any shortcoming of the entity generator. Most notably, it is useful to create constructor functions and other helper methods, as the generator doesn’t give you any means to add such.

让我们以这种方式扩展 Article。首先,让我们调整配置。我们应该为自定义实体类使用不同于 ".entity" 的后缀,这样重新生成时就不会将其擦除。我们还需要将这些新后缀也识别为实体。让我们使用后缀 ".customEntity"。我们还需要调整 fileName 以提供正确的路径,并在 onInitialMetadata 中将原始 "Article" 实体重命名为其他名称。假设我们为这个项目定下一个惯例:在此类类名前加上 "_" 前缀。

¥Let's extend the article in this fashion. First, let's adjust our config. We should use a different suffix from ".entity" for our custom entity class, so that we don't wipe it upon regeneration. We'll also need to recognize these new suffixes as entities too. Let's use the suffix ".customEntity". We'll also need to adjust the fileName to give the proper paths, and do the rename of the original "Article" entity to something else in onInitialMetadata. Let's say we'll make it a convention for this project to prefix such class names with "_".

src/mikro-orm.config.ts
// rest of imports

export default defineConfig({
  // rest of the config
  entities: ['dist/**/*.customEntity.js', 'dist/**/*.entity.js'],
  entitiesTs: ['src/**/*.customEntity.ts', 'src/**/*.entity.ts'],
  // rest of the config
  entityGenerator: {
    fileName: (entityName) => {
      switch (entityName) {
        case '_Article':
          return `article/article.entity`;
        case 'Article':
          return `article/article.customEntity`;
        case 'ArticleTag':
        case 'Tag':
        case 'Comment':
          return `article/${entityName.toLowerCase()}.entity`;
        case 'User':
          return `user/${entityName.toLowerCase()}.entity`;
        case 'Password':
          return `user/password.runtimeType`;
        case 'PasswordType':
          return `user/password.type`;
        default:
          return `common/${entityName.toLowerCase()}.entity`;
      }
    },
    onInitialMetadata: (metadata, platform) => {
      const userEntity = metadata.find(meta => meta.className === 'User');
      if (userEntity) {
        const passwordProp = userEntity.properties.password;
        passwordProp.hidden = true;
        passwordProp.lazy = true;
        passwordProp.type = 'PasswordType';
        passwordProp.runtimeType = 'Password';
      }

      const articleEntity = metadata.find(meta => meta.className === 'Article');
      if (articleEntity) {
        articleEntity.className = '_Article';
        articleEntity.abstract = true;
        const textProp = articleEntity.properties.text;
        textProp.lazy = true;
      }
    },
    // rest of entity generator config
  }
});

并尝试重新生成实体...哎呀,你将使实体生成器崩溃。发生了什么?"文章" 实体涉及 M:N 关系,在用户端尝试连接它时,未找到它,这是不行的。现在我们需要引入 onProcessedMetadata,以便我们仅在 M:N 发现已经发生后才交换我们的类。

¥And try to regenerate the entities... Oops, you'll crash the entity generator. What happened? The "Article" entity is involved in an M:N relationship, and upon trying to connect it on the User's end, it was not found, which is not OK. This is a case where we need to bring in onProcessedMetadata, so that we only swap out the class after the M:N discovery has already happened.

将配置更改为:

¥Change the config to:

src/mikro-orm.config.ts
// rest of imports

export default defineConfig({
  // rest of the config
  entities: ['dist/**/*.entity.js', 'dist/**/*.customEntity.js'],
  entitiesTs: ['src/**/*.entity.ts', 'src/**/*.customEntity.ts'],
  // rest of the config
  entityGenerator: {
    fileName: (entityName) => {
      switch (entityName) {
        case '_Article':
          return `article/article.entity`;
        case 'Article':
          return `article/article.customEntity`;
        case 'ArticleTag':
        case 'Tag':
        case 'Comment':
          return `article/${entityName.toLowerCase()}.entity`;
        case 'User':
          return `user/${entityName.toLowerCase()}.entity`;
        case 'Password':
          return `user/password.runtimeType`;
        case 'PasswordType':
          return `user/password.type`;
        default:
          return `common/${entityName.toLowerCase()}.entity`;
      }
    },
    onInitialMetadata: (metadata, platform) => {
      const userEntity = metadata.find(meta => meta.className === 'User');
      if (userEntity) {
        const passwordProp = userEntity.properties.password;
        passwordProp.hidden = true;
        passwordProp.lazy = true;
        passwordProp.type = 'PasswordType';
        passwordProp.runtimeType = 'Password';
      }

      const articleEntity = metadata.find(meta => meta.className === 'Article');
      if (articleEntity) {
        const textProp = articleEntity.properties.text;
        textProp.lazy = true;
      }
    },
    onProcessedMetadata: (metadata, platform) => {
      const articleEntity = metadata.find(meta => meta.className === 'Article');
      if (articleEntity) {
        articleEntity.className = '_Article';
        articleEntity.abstract = true;
      }
    },
    // rest of entity generator config
  }
});

现在重新生成实体应该可以工作了。但是,代码目前无法构建。

¥Regenerating the entities should now work. However, the code doesn't build yet.

要解决此问题,首先,让我们添加实际的自定义实体类。我们将添加一个 slug 函数作为自定义构造函数的一部分。

¥To fix this, first, let's add the actual custom entity class. We'll add a slug function as part of the custom constructor.

src/modules/article/article.customEntity.ts
import { Entity, type Rel } from '@mikro-orm/core';
import { _Article } from './article.entity.js';
import { User } from '../user/user.entity.js';

function convertToSlug(text: string) {
  return text
    .toLowerCase()
    .replace(/[^\w ]+/g, '')
    .replace(/ +/g, '-');
}

@Entity({ tableName: 'articles' })
export class Article extends _Article {

  constructor(title: string, text: string, author: Rel<User>) {
    super();
    this.title = title;
    this.text = text;
    this.author = author;
    this.slug = convertToSlug(title);
    this.description = this.text.substring(0, 999) + '…';
  }

}

最后,让我们编辑 "db.ts" 以引用正确的导入。顶部应为:

¥And finally, let's edit "db.ts" to reference the proper import. The top should read:

src/db.ts
import {
  type EntityManager,
  type EntityRepository,
  MikroORM,
  type Options
} from "@mikro-orm/mysql";
import config from "./mikro-orm.config.js";
- import { Article } from "./modules/article/article.entity.js";
+ import { Article } from "./modules/article/article.customEntity.js";
import { Tag } from "./modules/article/tag.entity.js";
import { User } from "./modules/user/user.entity.js";
import { Comment } from "./modules/article/comment.entity.js";

但是,你可能希望在 "架构优先" 流程中避免使用这种方法,因为你的自定义类现在超出了实体生成器的范围。重命名数据库表需要额外步骤,即重命名自定义类中的 tableName 选项。更改构造函数中使用的任何属性都可能会破坏构建。换句话说,自定义类在访问实体类和属性时需要与应用代码的其余部分一样小心。

¥However, this approach is one you may want to avoid in a "schema first" flow, because your custom class is now outside the entity generator's reach. Renaming the database table requires the extra step of renaming the tableName option in the custom class. Changing any property used in the constructor may break builds. In other words, the custom class requires the same care as the rest of your application code does when it accesses entity classes and properties.

自定义类型(例如我们对密码所做的操作)在技术上也超出了实体生成器的范围。但是,它们是独立的 - 即使实体完全改变形状,它们仍然可以存在,并且实体在再生期间可能会交换自定义类型。

¥Custom types, like what we did for the password, are also technically outside the entity generator's reach. However, they’re self-contained - they can still exist even if the entity changes shape entirely, and the entity may have a custom type swapped out during a regeneration.

由于我们确实在代码库中引入了这一点,我们还应该解决由此产生的另一个问题。尝试再次重新生成实体。你会注意到现在有一个错误。发生错误的原因是 MikroORM 试图导入 ".customEntity" 文件,但如果生成的实体尚未存在,该文件便无法运行。要解决此问题,我们需要在重新生成之前重命名我们的覆盖文件(以便 MikroORM 在实体生成期间无法识别它们),并在重新生成后恢复它们的名称。

¥Since we did introduce this in our code base though, we should also address another problem this creates. Try to regenerate the entities again. You will notice there's now an error. The error happens because MikroORM is trying to import the ".customEntity" files, but that file can't run without the generated entity already being present. To fix the problem, we'll need to rename our overrides before regeneration (so that MikroORM doesn't recognize them during entity generation), and restore their names after regeneration.

为此,请安装重命名器:

¥To do this, install renamer:

npm install --save-dev renamer

并将 regen 脚本调整为:

¥and adjust the regen script to:

package.json
    "regen": "rimraf -g ./src/**/*.entity.ts && renamer --silent --find /\\.customEntity\\.ts$/ --replace .customEntity.ts.bak ./src/** && mikro-orm-esm generate-entities --save && renamer --silent --find /\\.customEntity\\.ts\\.bak$/ --replace .customEntity.ts ./src/**",

添加虚拟属性

¥Adding virtual properties

让我们继续重新实现更多 "代码优先" 指南的应用。我们将以类似于 "代码优先" 指南的方式将 JWT 身份验证添加到我们的端点 - 通过保存用户 JWT 的虚拟属性。

¥Let's continue re-implementing more of the "code first" guide's application. We'll add JWT authentication to our endpoints, in a similar fashion to the way the "code first" guide does it - via a virtual property that holds the user's JWT.

首先,让我们添加属性。在 onInitialMetadata 内部,对于用户实体,我们需要使用表示新属性的对象调用 addProperty() 方法。实体生成器经过优化,可与预先填充了整个数据库元数据的对象一起使用,并且很少检查自定义属性。所以为了确保生成器不会崩溃,我们应该包含与真实列相同类型的信息,但将添加的 "persist" 选项设置为 "false"。在我们的例子中,我们需要一个映射到常规字符串类型的可空 varchar(255) 列。

¥First, let's add the property. Inside onInitialMetadata, for the user entity, we need to call the addProperty() method with an object representing the new property. The entity generator is optimized to work with objects that are pre-filled with the entire database metadata, and does very few checks for custom properties. So to ensure the generator doesn't crash, we should include the same type of information as if this was a real column, but with the added "persist" option set to "false". In our case, a nullable varchar(255) column mapped to a regular string type is what we need.

src/mikro-orm.config.ts
const userEntity = metadata.find(meta => meta.className === 'User');
if (userEntity) {
  userEntity.addProperty({
    persist: false,
    name: 'token',
    nullable: true,
    default: null,
    defaultRaw: 'null',
    fieldNames: [platform.getConfig().getNamingStrategy().propertyToColumnName('token')],
    columnTypes: ['varchar(255)'],
    type: 'string',
    runtimeType: 'string',
  });
  const passwordProp = userEntity.properties.password;
  passwordProp.hidden = true;
  passwordProp.lazy = true;
  passwordProp.type = 'PasswordType';
  passwordProp.runtimeType = 'Password';
}

现在重新生成实体将在实体中添加此属性。你甚至不需要在这里执行迁移,因为不涉及 "real" 数据库更改。从这里开始,我们仍然需要做我们在 "代码优先" 指南中必须做的事情。

¥Regenerating the entities now will add this property in the entity. You don't even need to perform a migration here, since there is no "real" database change involved. From here, we still need to do the same things we had to do in the "code first" guide.

安装 fastify JWT:

¥Install fastify JWT:

npm install @fastify/jwt

然后在 app.ts 顶部注册它,并在 ORM 钩子之后添加 jwt 验证请求钩子(以启用 JWT 验证以使用 DB):

¥Then register it at the top of app.ts, and add jwt verify request hook after the ORM hook (to enable JWT verification to use the DB):

src/app.ts
import fastifyJWT from '@fastify/jwt';

// ...

const app = fastify();

// register JWT plugin
app.register(fastifyJWT, {
  secret: process.env.JWT_SECRET ?? '12345678', // fallback for testing
});

// register request context hook
app.addHook('onRequest', (request, reply, done) => {
  RequestContext.create(db.em, done);
});

// register auth hook after the ORM one to use the context
app.addHook('onRequest', async (request) => {
  try {
    const ret = await request.jwtVerify<{ id: number }>();
    request.user = await db.user.findOneOrFail(ret.id);
  } catch (e) {
    app.log.error(e);
    // ignore token errors, we validate the request.user exists only where needed
  }
});

// ...

还将 JWT 签名添加到登录和注册端点,以使客户端能够看到签名的 JWT:

¥And also add JWT signing to the login and register endpoints, to enable the client to see the signed JWT:

src/app.ts
// ...
app.post('/sign-up', async request => {
  // ...
  await db.em.persist(user).flush();

  // after flush, we have the `user.id` set
  console.log(`User ${user.id} created`);

  user.token = app.jwt.sign({ id: user.id });

  return user;
});

app.post('/sign-in', async request => {
  // ...
  const user = await db.user.findOne({ email }, {
    populate: ['password'], // password is a lazy property, we need to populate it
  })
    // On failure, we return a pseudo user with an empty password hash.
    // This approach minimizes the effectiveness of timing attacks
    ?? { password: emptyHash, id: 0, token: undefined };

  if (await user.password.verifyAndMaybeRehash(password)) {
    await db.em.flush();
    user.token = app.jwt.sign({ id: user.id });
    return user;
  }

  throw err;
});

现在让我们添加一个 "/profile" 端点,向我们显示当前登录的用户:

¥And let's also now add a "/profile" endpoint, to show us the user currently logged in:

src/app.ts
app.get('/profile', async request => {
  if (!request.user) {
    throw new Error('Please provide your token via Authorization header');
  }

  return request.user as User;
});

⛳ 检查点 2

¥⛳ Checkpoint 2

我们的应用现在具有 JWT 身份验证和个人资料视图。同时,我们还完成了一个完整的数据库更改周期。在我们转向更多实体生成功能之前,让我们进行一些重构,使庞大的 app.ts 文件更易于管理,并添加一些测试。这将使我们的应用的最终版本及其附加功能更易于理解。

¥Our application now has JWT authentication and profile view. Meanwhile, we also did a full DB change cycle. Before we move onto more entity generation features, let's do some refactoring to make the big app.ts file more manageable, and add some tests. This will make the final version of our application, complete with its additional features, easier to reason about.

如果你希望在此阶段验证应用的 "manually" 版本,则需要使用 curl、Postman 或其他类似工具发出 POST 请求。或者,从浏览器控制台或单独的 Node REPL 使用 fetch()。

¥If you wanted to "manually" verify the application at this stage, you would need to issue the POST requests using curl, Postman or other similar tools. Or alternatively, use fetch() from a browser console or a separate Node REPL.

例如,注册:

¥Like to register:

await fetch(new Request('/sign-up', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json; charset=utf-8'
  },
  body: JSON.stringify({
    fullName: 'test',
    email: 'test@example.com',
    password: '1234'
  })
}));

然后登录:

¥and then to login:

await fetch(new Request('/sign-in', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json; charset=utf-8'
  },
  body: JSON.stringify({
    email: 'test@example.com',
    password: '1234'
  })
}));
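登录成功后,你可以使用响应中返回的 token 访问 "/profile"。下面是一个构建带有 Authorization 标头的请求的小示意(profileRequest 是为演示而虚构的辅助函数;@fastify/jwt 默认期望 "Bearer" 方案):

¥After a successful sign-in, you can use the token returned in the response to access "/profile". Below is a small sketch that builds a request with the Authorization header (profileRequest is a hypothetical helper for illustration; @fastify/jwt expects the "Bearer" scheme by default):

```typescript
// Hypothetical helper: build an authenticated request for the profile endpoint.
// The base URL is whatever your app listens on, e.g. http://localhost:3001.
function profileRequest(baseUrl: string, token: string) {
  return new Request(`${baseUrl}/profile`, {
    headers: { Authorization: `Bearer ${token}` },
  });
}

// usage from a browser console or a Node REPL:
// const res = await fetch(profileRequest('http://localhost:3001', token));
```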

应用重构

¥Application refactor

将路由移入模块

¥Move routes into the modules

让我们先将 app.ts 中的路由移动到适当的模块文件夹中,然后将它们与 app.ts 重新连接。

¥Let's first move the routes in app.ts into the appropriate module folders, and connect them back with app.ts.

对于每个 *.routes.ts 文件,我们将导出一个 fastify 异步插件,并注册我们的路由。每个路由文件都将使用前缀导入,以允许他们定义他们喜欢的任何路由,而不会与其他 *.routes.ts 文件冲突。

¥For each *.routes.ts file, we'll export a fastify async plugin, and register our routes. Each route file will be imported with a prefix, to allow them to define whatever routes they like, without conflicting with other *.routes.ts files.

我们用于 *.routes.ts 文件的基本样板:

¥Our basic boilerplate for *.routes.ts files:

import { type FastifyPluginAsync } from 'fastify';
import { type Services } from '../../db.js';

export default (async (app, { db }) => {
  // routes here
}) as FastifyPluginAsync<{ db: Services }>;

具体来说:

¥And specifically:

src/modules/user/user.routes.ts
import { type FastifyPluginAsync } from 'fastify';
import { type Services } from '../../db.js';
import { User } from './user.entity.js';
import { Password } from './password.runtimeType.js';
import { type EntityData } from '@mikro-orm/mysql';

const emptyHash = await Password.fromRaw('');

export default (async (app, { db }) => {

  // register new user
  app.post('/sign-up', async request => {
    const body = request.body as EntityData<User, true>;

    if (!body.email || !body.fullName || !body.password) {
      throw new Error('One of required fields is missing: email, fullName, password');
    }

    if ((await db.user.count({ email: body.email })) > 0) {
      throw new Error('This email is already registered, maybe you want to sign in?');
    }

    const user = db.user.create({
      fullName: body.fullName,
      email: body.email,
      password: await Password.fromRaw(body.password),
      bio: body.bio ?? '',
    });
    await db.em.persist(user).flush();

    // after flush, we have the `user.id` set
    console.log(`User ${user.id} created`);

    user.token = app.jwt.sign({ id: user.id });

    return user;
  });

  app.post('/sign-in', async request => {
    const { email, password } = request.body as { email: string; password: string };
    const err = new Error('Invalid combination of email and password');
    if (password === '' || email === '') {
      throw err;
    }

    const user = await db.user.findOne({ email }, {
      populate: ['password'], // password is a lazy property, we need to populate it
    })
      // On failure, we return a pseudo user with an empty password hash.
      // This approach minimizes the effectiveness of timing attacks
      ?? { password: emptyHash, id: 0, token: undefined };

    if (await user.password.verifyAndMaybeRehash(password)) {
      await db.em.flush();
      user.token = app.jwt.sign({ id: user.id });
      return user; // password is a hidden property, so it won't be returned, even on success
    }

    throw err;
  });

  app.get('/profile', async request => {
    if (!request.user) {
      throw new Error('Please provide your token via Authorization header');
    }

    return request.user as User;
  });
}) as FastifyPluginAsync<{ db: Services }>;

并且

¥and also

src/modules/article/article.routes.ts
import { type FastifyPluginAsync } from 'fastify';
import { type Services } from '../../db.js';

export default (async (app, { db }) => {
  app.get('/', async (request) => {
    const { limit, offset } = request.query as {
      limit?: number;
      offset?: number;
    };
    const [items, total] = await db.article.findAndCount(
      {},
      {
        limit,
        offset,
      }
    );

    return { items, total };
  });
}) as FastifyPluginAsync<{ db: Services }>;

让我们也移出钩子。这些需要我们用 fastify-plugin 封装它们,因为我们希望这些钩子跨越所有前缀。

¥and let's also move out the hooks too. These would require we wrap them with fastify-plugin instead, since we want these hooks across all prefixes.

src/modules/common/hooks.ts
import { fastifyPlugin } from 'fastify-plugin';
import { type Services } from '../../db.js';
import { RequestContext } from '@mikro-orm/mysql';

export default fastifyPlugin<{ db: Services }>(async (app, { db }) => {

  // register request context hook
  app.addHook('onRequest', (request, reply, done) => {
    RequestContext.create(db.em, done);
  });

  // register auth hook after the ORM one to use the context
  app.addHook('onRequest', async (request) => {
    try {
      const ret = await request.jwtVerify<{ id: number }>();
      request.user = await db.user.findOneOrFail(ret.id);
    } catch (e) {
      app.log.error(e);
      // ignore token errors, we validate the request.user exists only where needed
    }
  });

  // shut down the connection when closing the app
  app.addHook('onClose', async () => {
    await db.orm.close();
  });

});

这使得我们的 app.ts 如下所示:

¥Which leaves our app.ts like:

src/app.ts
import { fastify } from 'fastify';
import fastifyJWT from '@fastify/jwt';
import { initORM } from './db.js';
import hooks from './modules/common/hooks.js';
import userRoutes from './modules/user/user.routes.js';
import articleRoutes from './modules/article/article.routes.js';

export async function bootstrap(port = 3001, migrate = true) {
  const db = await initORM({
    ensureDatabase: { create: false },
  });

  if (migrate) {
    // sync the schema
    await db.orm.migrator.up();
  }

  const app = fastify();

  // register JWT plugin
  app.register(fastifyJWT, {
    secret: process.env.JWT_SECRET ?? '12345678', // fallback for testing
  });

  await app.register(hooks, { db });

  // register routes here
  app.register(articleRoutes, { db, prefix: 'article' });
  app.register(userRoutes, { db, prefix: 'user' });

  const url = await app.listen({ port });

  return { app, url };
}

这要好得多。我们的 URL 端点现在是 "/article"、"/user/sign-up"、"/user/sign-in"、"/user/profile"。

¥which is much nicer. Our URL endpoints are now "/article", "/user/sign-up", "/user/sign-in", "/user/profile".

使配置环境依赖

¥Making the config env dependent

我们之前提到过,如果你需要特定于工具的配置,则可以拆分配置文件。但是,更一般地说,你至少需要一个 dev vs prod 配置,其中 "dev" 基本上是 "运行 MikroORM CLI 时",而 "prod" 基本上是 "应用运行时"。

¥We mentioned earlier that you could split your config files if you need tool-specific configs. However, more generally, you will at least want a dev vs prod config, with "dev" basically being "when running the MikroORM CLI", while "prod" would basically be "when the application is running".

我们可以根据参数检测我们是否在 MikroORM CLI 中运行,并采取相应的措施。

¥We can detect whether we're running in the MikroORM CLI based on the arguments, and act accordingly.

虽然我们不需要特定于工具的配置,但实体生成有一个烦人的事情,我们可以通过专门针对实体生成器的配置调整来解决。由于在引入映射的超类后我们必须对实体进行重命名,因此你可能已经看到你的 IDE 无法识别映射的超类。它会一直保持这种状态,直到你重新启动 IDE 的 typescript 服务器,或者剪切并粘贴映射的超类引用以强制重新检查。我们可以通过调整配置以完全不显示实体来避免这种烦恼,但仅在从 MikroORM CLI 运行 regenerate-entities 命令时才显示。

¥And although we don't require a tool-specific config, there is one annoying thing about entity generation that we can tackle with a config adjustment specifically to the entity generator. Because of the renames that we have to do for our entity regeneration after our mapped superclass was introduced, you may have seen your IDE fail to recognize the mapped superclass. And it stays like that until you restart your IDE's typescript server, or cut and paste the mapped superclass reference to force a re-check. We can avoid this annoyance by adjusting our config to not feature the entities at all, but only when running the regenerate-entities command from the MikroORM CLI.

src/mikro-orm.config.ts
import {
  defineConfig,
  type MikroORMOptions,
} from '@mikro-orm/mysql';
import { UnderscoreNamingStrategy } from '@mikro-orm/core';
import { Migrator } from '@mikro-orm/migrations';
import pluralize from 'pluralize';
import { join } from 'node:path';

const isInMikroOrmCli = process.argv[1]?.endsWith(join('@mikro-orm', 'cli', 'esm')) ?? false;
const isRunningGenerateEntities = isInMikroOrmCli && process.argv[2] === 'generate-entities';

const mikroOrmExtensions: MikroORMOptions['extensions'] = [Migrator];
if (isInMikroOrmCli) {
  mikroOrmExtensions.push((await import('@mikro-orm/entity-generator')).EntityGenerator);
}

export default defineConfig({
  extensions: mikroOrmExtensions,
  multipleStatements: isInMikroOrmCli,
  discovery: {
    warnWhenNoEntities: !isInMikroOrmCli,
  },
  entities: isRunningGenerateEntities ? [] : ['dist/**/*.customEntity.js', 'dist/**/*.entity.js'],
  entitiesTs: isRunningGenerateEntities ? [] : ['src/**/*.customEntity.ts', 'src/**/*.entity.ts'],
  // rest of the config
});

And with that in place, we can revert the changes we made earlier to the entity generation process, i.e.

package.json
-  "regen": "rimraf -g ./src/**/*.entity.ts && renamer --silent --find /\\.customEntity\\.ts$/ --replace .customEntity.ts.bak ./src/** && mikro-orm-esm generate-entities --save && renamer --silent --find /\\.customEntity\\.ts\\.bak$/ --replace .customEntity.ts ./src/**",
+ "regen": "rimraf -g ./src/**/*.entity.ts && mikro-orm-esm generate-entities --save",

and

npm uninstall renamer

We should further make it so that migrations run in a separate connection where multipleStatements is enabled, while it is disabled for everything else, for the sake of security.

Let's make app.ts be like:

src/app.ts
import { fastify } from 'fastify';
import fastifyJWT from '@fastify/jwt';
import { initORM } from './db.js';
import hooks from './modules/common/hooks.js';
import userRoutes from './modules/user/user.routes.js';
import articleRoutes from './modules/article/article.routes.js';

export async function bootstrap(port = 3001, migrate = true) {
  const db = await initORM(migrate ? { multipleStatements: true, ensureDatabase: { create: false } } : {});

  if (migrate) {
    // sync the schema
    await db.orm.migrator.up();
    await db.orm.reconnect({ multipleStatements: false });
  }

  const app = fastify();

  // register JWT plugin
  await app.register(fastifyJWT, {
    secret: process.env.JWT_SECRET ?? '12345678', // fallback for testing
  });

  await app.register(hooks, { db });

  // register routes here
  app.register(articleRoutes, { db, prefix: 'article' });
  app.register(userRoutes, { db, prefix: 'user' });

  const url = await app.listen({ port });

  return { app, url };
}

Testing the endpoints

So far, when we've checked the resulting app, we've been doing so "manually". Let's add some tests, so that we can repeatedly check that everything keeps working as we make further changes and additions.

In a "code first" approach, you can let the schema generator create the test database's schema for you, based on your entity definitions. While you could do the same in a "schema first" approach, if your database schema is sufficiently complex, you may end up in a situation where the schema generator produces something slightly different from your true schema. This may happen because of bugs where the entity generator does not produce correct/complete code, or because your schema includes features that MikroORM does not yet track in general, like triggers and routines. Such differences will in turn throw your test results off, particularly when your application relies on them. The best way to avoid issues like this is to simply run your migrations at the start of the test suite. If you have too many migrations, you may consider occasionally doing a database DDL dump using a tool native to your database engine (e.g. "mysqldump" in the case of MySQL), plus a data dump of the MikroORM migrations table, and then executing these before running the rest of the migrations created after that dump.

To keep this guide simple, we will just run the migrations.

Let's create a test util to init our test database:

test/utils.ts
import { bootstrap } from '../src/app.js';
import { initORM } from '../src/db.js';

export async function initTestApp(port: number) {
  // this will create all the ORM services and cache them
  await initORM({
    // no need for debug information, it would only pollute the logs
    debug: false,
    // we will use a dynamic name, based on port. This way we can easily parallelize our tests
    dbName: `blog_test_${port}`,
    // create the empty database if needed; the migrations will create the schema
    ensureDatabase: { create: false },
    // required for the migrations
    multipleStatements: true,
  });

  const { app } = await bootstrap(port);

  return app;
}

and add a test for our "/article" endpoint:

test/article.test.ts
import { afterAll, beforeAll, expect, test } from 'vitest';
import { FastifyInstance } from 'fastify';
import { initTestApp } from './utils.js';

let app: FastifyInstance;

beforeAll(async () => {
  // we use different ports to allow parallel testing
  app = await initTestApp(30001);
});

afterAll(async () => {
  // we close only the fastify app - it will close the database connection via onClose hook automatically
  await app.close();
});

test('list all articles', async () => {
  // mimic the http request via `app.inject()`
  const res = await app.inject({
    method: 'get',
    url: '/article',
  });

  // assert it was a successful response
  expect(res.statusCode).toBe(200);

  // with expected shape
  expect(res.json()).toMatchObject({
    items: [],
    total: 0,
  });
});

If you've previously gone through the "code first" guide, you know this breaks with an error message like:

FAIL  test/article.test.ts [ test/article.test.ts ]
TypeError: Unknown file extension ".ts" for /blog-api/src/modules/article/article.entity.ts

and to fix it, we need to adjust the config to add a dynamic import:

test/utils.ts
import { bootstrap } from '../src/app.js';
import { initORM } from '../src/db.js';

export async function initTestApp(port: number) {
  // this will create all the ORM services and cache them
  await initORM({
    // no need for debug information, it would only pollute the logs
    debug: false,
    // we will use a dynamic name, based on port. This way we can easily parallelize our tests
    dbName: `blog_test_${port}`,
    // create the empty database if needed; the migrations will create the schema
    ensureDatabase: { create: false },
    // required for the migrations
    multipleStatements: true,
+   // required for vitest
+   dynamicImportProvider: id => import(id),
  });

  const { app } = await bootstrap(port);

  return app;
}

And now, trying to run it again... you should see a different error:

Error: Please provide either 'type' or 'entity' attribute in User.id. If you are using decorators, ensure you have 'emitDecoratorMetadata' enabled in your tsconfig.json.

But we did add emitDecoratorMetadata in our tsconfig.json, right? Yes, but vitest uses ESBuild to transpile the sources, and ESBuild doesn't support this out of the box. There are several solutions to this problem. We may either:

  1. Use @mikro-orm/reflection to analyze the sources in a different fashion that doesn't rely on emitDecoratorMetadata.

  2. Swap out ESBuild for SWC, and configure SWC to support decorators.

  3. Install @anatine/esbuild-decorators and add it to the vitest config.

  4. Adjust the entity generator to always output the "type" property, thus bypassing the need to infer the type in the first place.

Option 1 is what the "code first" guide does, and it is a great solution if you are writing the entity definitions manually. Options 2 and 3 are alternatives you may go for if you also need emitDecoratorMetadata for other purposes. For this guide, we'll go with option 4, because it is the easiest to do.

Add to your config:

src/mikro-orm.config.ts
  entityGenerator: {
    scalarTypeInDecorator: true,
    // rest of entity generator config
  }

and regenerate the entities. You can now run the test without an error. You may also remove emitDecoratorMetadata from tsconfig.json at this point, unless you need it for another library.
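For illustration, here is roughly what a regenerated property looks like with scalarTypeInDecorator enabled. This is a hypothetical excerpt, not actual generator output; your column types and lengths will differ:

```typescript
import { Entity, PrimaryKey, Property } from '@mikro-orm/mysql';

@Entity({ tableName: 'users' })
export class User {

  // the scalar type is written out in the decorator options,
  // so nothing needs to be inferred via emitDecoratorMetadata
  @PrimaryKey({ type: 'integer' })
  id!: number;

  @Property({ type: 'string', length: 255 })
  email!: string;

}
```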

Now that we have the article test working, let's also add tests for the user endpoint. We'll register a user, try to log in with them, see their profile, and remove the user at the end, to keep the test repeatable.

test/user.test.ts
import { FastifyInstance } from 'fastify';
import { afterAll, beforeAll, expect, test } from 'vitest';
import { initTestApp } from './utils.js';
import { EntityData } from '@mikro-orm/core';
import { User } from '../src/modules/user/user.entity.js';
import { initORM } from '../src/db.js';

let app: FastifyInstance;

beforeAll(async () => {
  // we use different ports to allow parallel testing
  app = await initTestApp(30002);
});

afterAll(async () => {
  const db = await initORM();
  try {
    const fork = db.em.fork();
    await fork.removeAndFlush(await fork.findOneOrFail(User, { email: 'foo@bar.com' }));
  } catch (e: unknown) {
    console.error(e);
  }
  // we close only the fastify app - it will close the database connection via onClose hook automatically
  await app.close();
});

test('full flow', async () => {
  const res1 = await app.inject({
    method: 'post',
    url: '/user/sign-up',
    payload: {
      fullName: 'Foo Bar',
      email: 'foo@bar.com',
      password: 'password123',
    },
  });

  expect(res1.statusCode).toBe(200);
  expect(res1.json()).toMatchObject({
    fullName: 'Foo Bar',
  });

  const res1dup = await app.inject({
    method: 'post',
    url: '/user/sign-up',
    payload: {
      fullName: 'Foo Bar',
      email: 'foo@bar.com',
      password: 'password123',
    },
  });

  expect(res1dup.statusCode).toBe(500);
  expect(res1dup.json()).toMatchObject({
    message: 'This email is already registered, maybe you want to sign in?',
  });

  const res2 = await app.inject({
    method: 'post',
    url: '/user/sign-in',
    payload: {
      email: 'foo@bar.com',
      password: 'password123',
    },
  });

  expect(res2.statusCode).toBe(200);
  expect(res2.json()).toMatchObject({
    fullName: 'Foo Bar',
  });

  const res3 = await app.inject({
    method: 'post',
    url: '/user/sign-in',
    payload: {
      email: 'foo@bar.com',
      password: 'password456',
    },
  });

  expect(res3.statusCode).toBe(500);
  expect(res3.json()).toMatchObject({ message: 'Invalid combination of email and password' });

  const res4 = await app.inject({
    method: 'get',
    url: '/user/profile',
    headers: {
      'Authorization': `Bearer ${res2.json().token}`,
    },
  });
  expect(res4.statusCode).toBe(200);
  expect(res2.json()).toMatchObject(res4.json());
});

This test should also pass with no errors. If all is good, we can move on to a few more application refactorings.

Adding better error handling

Let's adjust the application so that it returns appropriate status codes, rather than status code 500 on any error. Add a dedicated error class file. As our own convention, let's say we'll place custom error classes in files with the ".error.ts" suffix. There is no technical reason for this; it's purely organizational.

src/modules/common/auth.error.ts
export class AuthError extends Error {}

And then let's make it so that we return status 401 for this error. Add this handler to hooks.ts:

src/modules/common/hooks.ts
import { fastifyPlugin } from 'fastify-plugin';
import { type Services } from '../../db.js';
import { NotFoundError, RequestContext } from '@mikro-orm/mysql';
import { AuthError } from './auth.error.js';

export default fastifyPlugin<{ db: Services }>(async (app, { db }) => {

  // rest of the code

  // register global error handler to process 404 errors from `findOneOrFail` calls
  app.setErrorHandler((error, request, reply) => {
    if (error instanceof AuthError) {
      return reply.status(401).send(error);
    }

    // we also handle not found errors automatically
    // `NotFoundError` is an error thrown by the ORM via `em.findOneOrFail()` method
    if (error instanceof NotFoundError) {
      return reply.status(404).send(error);
    }

    app.log.error(error);
    reply.status(500).send(error);
  });
});

And finally, let's actually throw that error on auth failures. Modify user.routes.ts:

src/modules/user/user.routes.ts
...
import { type EntityData } from '@mikro-orm/mysql';
+import { AuthError } from '../common/auth.error.js';
...
app.post('/sign-in', async request => {
  const { email, password } = request.body as { email: string; password: string };
- const err = new Error('Invalid combination of email and password');
+ const err = new AuthError('Invalid combination of email and password');
...
app.get('/profile', async request => {
  if (!request.user) {
-   throw new Error('Please provide your token via Authorization header');
+   throw new AuthError('Please provide your token via Authorization header');
  }
...

If you try to re-run the tests now, you should see a test failure at the status code check. Let's modify the test too, to reflect our new intended behavior:

test/user.test.ts
-  expect(res3.statusCode).toBe(500);
+ expect(res3.statusCode).toBe(401);

And now, the test passes again.

Adding repositories

Let's move some of the user logic into a custom repository. First, let's define the repository. We'll include a method to check whether an email exists, and another to log users in:

src/modules/user/user.repository.ts
import { EntityRepository } from '@mikro-orm/mysql';
import { User } from './user.entity.js';
import { AuthError } from '../common/auth.error.js';
import { Password } from './password.runtimeType.js';

const emptyHash = await Password.fromRaw('');

export class UserRepository extends EntityRepository<User> {

  async exists(email: string) {
    return (await this.count({ email })) > 0;
  }

  async login(email: string, password: string) {
    const err = new AuthError('Invalid combination of email and password');
    if (password === '' || email === '') {
      throw err;
    }

    const user = await this.findOne({ email }, {
      populate: ['password'], // password is a lazy property, we need to populate it
    })
      // On failure, we return a pseudo user with an empty password hash.
      // This approach minimizes the effectiveness of timing attacks
      ?? { password: emptyHash, id: 0, token: undefined };

    if (await user.password.verifyAndMaybeRehash(password)) {
      await this.getEntityManager().flush();
      return user; // password is a hidden property, so it won't be returned, even on success
    }

    throw err;
  }
}

Next, we'll need to associate this repository with the user entity on the entity's side. To do that in a "schema first" approach, you need to fill in the repositoryClass property in the extension hooks:

src/mikro-orm.config.ts
+        case 'UserRepository':
+          return `user/user.repository`;
         case 'User':
           return `user/${entityName.toLowerCase()}.entity`;
...
   const userEntity = metadata.find(meta => meta.className === 'User');
   if (userEntity) {
+    userEntity.repositoryClass = 'UserRepository';
...

and regenerate the entities.
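After regeneration, the User entity should contain something along these lines. This is a hypothetical excerpt showing only the repository-related parts, under the assumption that the generator uses MikroORM's standard repository wiring:

```typescript
import { Entity, EntityRepositoryType, PrimaryKey } from '@mikro-orm/mysql';
import { UserRepository } from './user.repository.js';

// the custom repository is linked via a factory in the decorator,
// and the EntityRepositoryType symbol provides the TypeScript hint
@Entity({ tableName: 'users', repository: () => UserRepository })
export class User {

  [EntityRepositoryType]?: UserRepository;

  @PrimaryKey({ type: 'integer' })
  id!: number;

}
```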

The entity's options will now include a factory for the repository class, as well as a TypeScript hint. To use the custom repository class when available, and fall back to the default one when not, we should modify our database wrapper to use the GetRepository type, like so:

src/db.ts
import {
  type EntityManager,
  type EntityRepository,
+ type GetRepository,
  MikroORM,
  type Options
} from "@mikro-orm/mysql";
...

export interface Services {
  orm: MikroORM;
  em: EntityManager;
- user: EntityRepository<User>;
- article: EntityRepository<Article>;
- tag: EntityRepository<Tag>;
- comment: EntityRepository<Comment>;
+ user: GetRepository<User, EntityRepository<User>>;
+ article: GetRepository<Article, EntityRepository<Article>>;
+ tag: GetRepository<Tag, EntityRepository<Tag>>;
+ comment: GetRepository<Comment, EntityRepository<Comment>>;
}
...
...

The second type argument of GetRepository is a fallback class, used in case the entity does not define a type hint. That fallback should match the class defined in the config as the default repository. We're using MikroORM's default, so we just specify that.
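To make the effect of GetRepository concrete, here is a type-level sketch, assuming only User carries the EntityRepositoryType hint and the module layout from this guide:

```typescript
import { type EntityRepository, type GetRepository } from '@mikro-orm/mysql';
import { User } from '../src/modules/user/user.entity.js';
import { UserRepository } from '../src/modules/user/user.repository.js';
import { Tag } from '../src/modules/article/tag.entity.js';

// resolves to UserRepository, because User defines the hint
type UserRepo = GetRepository<User, EntityRepository<User>>;

// Tag has no hint, so the second (fallback) argument is used
type TagRepo = GetRepository<Tag, EntityRepository<Tag>>;
```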

Now that we have the repository defined and available, we can use it in user.routes.ts, like so:

src/modules/user/user.routes.ts
...
-const emptyHash = await Password.fromRaw('');
...
app.post('/sign-up', async request => {
  const body = request.body as EntityData<User, true>;

  if (!body.email || !body.fullName || !body.password) {
    throw new Error('One of required fields is missing: email, fullName, password');
  }

- if ((await db.user.count({ email: body.email })) > 0) {
+ if (await db.user.exists(body.email)) {
    throw new Error('This email is already registered, maybe you want to sign in?');
  }
...
app.post('/sign-in', async request => {
  const { email, password } = request.body as { email: string; password: string };
- const err = new AuthError('Invalid combination of email and password');
- if (password === '' || email === '') {
-   throw err;
- }
-
- const user = await db.user.findOne({ email }, {
-   populate: ['password'], // password is a lazy property, we need to populate it
- })
-   // On failure, we return a pseudo user with an empty password hash.
-   // This approach minimizes the effectiveness of timing attacks
-   ?? { password: emptyHash, id: 0, token: undefined };
-
- if (await user.password.verifyAndMaybeRehash(password)) {
-   await db.em.flush();
-   user.token = app.jwt.sign({ id: user.id });
-   return user; // password is a hidden property, so it won't be returned, even on success
- }
-
- throw err;
+ const user = await db.user.login(email, password);
+ user.token = app.jwt.sign({ id: user.id });
+ return user;
});

Adding input runtime validation via Zod

Every time we use as on something from request, we are effectively telling TypeScript that we know what the user input will be shaped like. In reality, nothing stops the user from submitting something that doesn't conform to that shape, or from not sending JSON in the first place. We should validate all user input (which in our case means anything from "request") before passing it further along in our logic. One good way to do that is with Zod. Let's add such validation.

Install Zod:

npm install zod

First off, let's deal with the sign-in endpoint.

src/modules/user/user.routes.ts
...
+import { z } from 'zod';
+
...
+ const signInPayload = z.object({
+   email: z.string().min(1),
+   password: z.string().min(1),
+ });
+
  app.post('/sign-in', async request => {
-   const { email, password } = request.body as { email: string; password: string };
+   const { email, password } = signInPayload.parse(request.body);
...

Zod includes a validator for the syntactic validity of an email, but we don't need it during sign-in. As long as the email is not empty, we can search for it; if it is not valid, it won't exist in the database to begin with. We'll make sure of that during sign-up. Let's do that now, and while we're at it, let's automatically hash the password after validation, to simplify the call to the create() method:

src/modules/user/user.routes.ts
...
+ const signUpPayload = z.object({
+   email: z.string().email(),
+   password: z
+     .string()
+     .min(1)
+     .transform(async (raw) => Password.fromRaw(raw)),
+   fullName: z.string().min(1),
+   bio: z.string().optional().default(''),
+ });
+
  app.post('/sign-up', async request => {
-   const body = request.body as EntityData<User, true>;
-
-   if (!body.email || !body.fullName || !body.password) {
-     throw new Error('One of required fields is missing: email, fullName, password');
-   }
-
-   if (await db.user.exists(body.email)) {
-     throw new Error('This email is already registered, maybe you want to sign in?');
-   }
-
-   const user = db.user.create({
-     fullName: body.fullName,
-     email: body.email,
-     password: await Password.fromRaw(body.password),
-     bio: body.bio ?? '',
-   });
+   const body = await signUpPayload.parseAsync(request.body);
+
+   if (await db.user.exists(body.email)) {
+     throw new Error('This email is already registered, maybe you want to sign in?');
+   }
+
+   const user = db.user.create(body);
...

You could add a check constraint for this instead (or in addition to Zod), but the check constraint would only be applied later, after we have spent time hashing the new password. To save time and server resources on long creation procedures like that, you should do as much validation as you can, as early as you can, like we did here.
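If you did want such a database-side guard as well, the migration could look roughly like this. The constraint name and the LIKE predicate are assumptions for illustration, and note that MySQL only enforces CHECK constraints from version 8.0.16 onward:

```typescript
import { Migration } from '@mikro-orm/migrations';

export class MigrationAddEmailCheck extends Migration {

  async up(): Promise<void> {
    // reject values that cannot possibly be an email address
    await this.execute(`
      ALTER TABLE \`blog\`.\`users\`
        ADD CONSTRAINT \`users_email_chk\` CHECK (\`email\` LIKE '_%@_%');
    `);
  }

  async down(): Promise<void> {
    await this.execute(`
      ALTER TABLE \`blog\`.\`users\`
        DROP CONSTRAINT \`users_email_chk\`;
    `);
  }

}
```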

Finally, let's add some validation for the query string in article.routes.ts. Unlike our sign-up and sign-in validators, there's a high chance we'll want to do paging in multiple places (e.g. in a list of users), so we should define this validator in a dedicated file.

src/modules/common/validators.ts
import { z } from 'zod';

export const pagingParams = z.object({
  // coerce, since query-string values arrive as strings
  limit: z.coerce.number().int().positive().optional(),
  offset: z.coerce.number().int().nonnegative().optional(),
});

And now let's use it at the "/article" endpoint:

src/modules/article/article.routes.ts
 import { type Services } from '../../db.js';
+import { pagingParams } from '../common/validators.js';

 export default (async (app, { db }) => {
   app.get('/', async (request) => {
-    const { limit, offset } = request.query as {
-      limit?: number;
-      offset?: number;
-    };
+    const { limit, offset } = pagingParams.parse(request.query);
...
...

Making backwards compatible changes to the database

Near the end there, you may have noticed that we still had to check whether the user's email exists before adding them to the database. On a busy server however, it's possible for a user to be added right in between our check and the flush of the new user. Further, if we had many users, we would need to do a linear search on the table, as there's no index on the email column. We can add one, and we should make it unique, to prevent double insertion on a busy server.

Let's write a new migration for that:

src/migrations/Migration00000000000002.ts
import { Migration } from '@mikro-orm/migrations';

export class Migration00000000000002 extends Migration {

  async up(): Promise<void> {
    await this.execute(`
      ALTER TABLE \`blog\`.\`users\`
        ADD UNIQUE INDEX \`email_UNIQUE\` (\`email\` ASC) VISIBLE;
    `);
  }

  async down(): Promise<void> {
    await this.execute(`
      ALTER TABLE \`blog\`.\`users\`
        DROP INDEX \`email_UNIQUE\`;
    `);
  }

}

Because this migration is fully backwards compatible, and we automatically run migrations during startup, we could deploy our code without even regenerating the entities. However, we should do that anyway, since the entity definitions have changed as a result.

After this migration is executed, we can output a custom error when that unique constraint is violated. Note that we should still keep the application-level check performed before the create attempt. Attempting to insert will consume an auto-increment ID even on a unique constraint violation, so to prevent its early exhaustion, we should check in advance as well.

Let's add that custom error class first:

src/modules/user/duplicate.error.ts
export class DuplicateUserError extends Error {}

And then wrap violations of the unique constraint on sign-up:

src/modules/user/user.routes.ts
...
+import { UniqueConstraintViolationException } from '@mikro-orm/mysql';
+import { DuplicateUserError } from './duplicate.error.js';
...

 // register new user
 app.post('/sign-up', async request => {
   const body = await signUpPayload.parseAsync(request.body);
   if (await db.user.exists(body.email)) {
-    throw new Error('This email is already registered, maybe you want to sign in?');
+    throw new DuplicateUserError('This email is already registered, maybe you want to sign in?');
   }
   const user = db.user.create(body);

-  await db.em.persist(user).flush();
-
-  // after flush, we have the `user.id` set
-  console.log(`User ${user.id} created`);
-
-  user.token = app.jwt.sign({ id: user.id });
+  try {
+    await db.em.persist(user).flush();
+
+    // after flush, we have the `user.id` set
+    console.log(`User ${user.id} created`);
+
+    user.token = app.jwt.sign({ id: user.id });
+
+    return user;
+  } catch (e: unknown) {
+    if (e instanceof UniqueConstraintViolationException) {
+      throw new DuplicateUserError(
+        'This email is already registered, maybe you want to sign in?',
+        { cause: e },
+      );
+    }
+    throw e;
+  }
 });
...

And finally, we can again return a different status code for this error. Status 409 Conflict seems the most appropriate.

src/modules/common/hooks.ts
...
 import { AuthError } from './auth.error.js';
+import { DuplicateUserError } from '../user/duplicate.error.js';
...
 app.setErrorHandler((error, request, reply) => {
   if (error instanceof AuthError) {
     return reply.status(401).send(error);
   }

   // we also handle not found errors automatically
   // `NotFoundError` is an error thrown by the ORM via `em.findOneOrFail()` method
   if (error instanceof NotFoundError) {
     return reply.status(404).send(error);
   }
+
+  if (error instanceof DuplicateUserError) {
+    return reply.status(409).send(error);
+  }
...

We can now adjust our test accordingly:

test/user.test.ts
...
- expect(res1dup.statusCode).toBe(500);
+ expect(res1dup.statusCode).toBe(409);
  expect(res1dup.json()).toMatchObject({
    message: 'This email is already registered, maybe you want to sign in?',
  });
...

Exercise

You can try to generalize this to work for any unique constraint violation by analyzing the sqlMessage property of UniqueConstraintViolationException and searching the metadata based on the parsing results. You can then produce error messages that point the client to the property or properties causing the duplicate entry. Doing so ties you to your SQL driver, and possibly even to the database engine version, so if you go down this route, you should do so with care. Write unit tests for your parsing in addition to your error conditions themselves, and make sure to run all tests after a database upgrade. If the error messages have changed, you will want to support parsing both forms until your production database is updated.
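As a starting point for that exercise, a parser for MySQL 8's duplicate-entry message might look like this. The message format is an assumption tied to the driver and engine version, which is exactly why it needs its own unit tests:

```typescript
// parses e.g. "Duplicate entry 'foo@bar.com' for key 'users.email_UNIQUE'"
// into the duplicated value and the key name (with the optional table
// qualifier stripped); returns undefined for any other message
export function parseDuplicateKey(sqlMessage: string): { value: string; key: string } | undefined {
  const match = sqlMessage.match(/^Duplicate entry '(.*)' for key '(?:[^.']+\.)?([^']+)'$/);
  if (!match) {
    return undefined;
  }
  return { value: match[1], key: match[2] };
}
```

You could then look up the key name in the ORM metadata to find the affected properties.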

Reusing the user authentication check

We have the "/profile" endpoint, where we check whether the user is authenticated, and return it if so. For the sake of other endpoints that need authentication, we should extract this into a function that will either give us the current user, or throw.

src/modules/common/utils.ts
import { FastifyRequest } from 'fastify';
import { type User } from '../user/user.entity.js';
import { AuthError } from './auth.error.js';

export function getUserFromToken(req: FastifyRequest): User {
  if (!req.user) {
    throw new AuthError('Please provide your token via Authorization header');
  }

  return req.user as User;
}

and we can already adjust the user.routes.ts file to use it:

src/modules/user/user.routes.ts
...
-import { AuthError } from '../common/auth.error.js';
+import { getUserFromToken } from '../common/utils.js';
...
 app.get('/profile', async request => {
-  if (!request.user) {
-    throw new AuthError('Please provide your token via Authorization header');
-  }
-
-  return request.user as User;
+  return getUserFromToken(request);
 });

Modularizing the configuration

Our mikro-orm.config.ts file has already grown quite a lot, even between "Checkpoint 2" and now. As the number of your entities and the number of modifications you wish to make grows, you may need to put related modifications into dedicated files, and just let mikro-orm.config.ts collect and apply them. Exactly how you do that depends on your project and your needs.

We will implement an organization similar to what we have been doing so far, and create files per module with all module-related modifications, under a suffix that denotes their purpose. Let's use the suffix *.gen.ts. Each such file will have a default export of type GenerateOptions. For each method, we will apply that method, if defined. As a special case, an empty string returned from our files' fileName will mean "Don't use this result, try the next one". If we returned an empty string to the entity generator directly, it would happily create a file with no name and a .ts extension in the base folder, but we know we don't need that.
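To make the "empty string means try the next file" convention concrete, the collecting side could chain the per-module hooks like this. This is a sketch; chainFileNames and the minimal GenSettings type are illustrative names, not part of MikroORM:

```typescript
// minimal stand-in for the part of GenerateOptions we chain here
type GenSettings = { fileName?: (entityName: string) => string };

// builds a single fileName hook out of all per-module hooks:
// the first module returning a non-empty string claims the entity,
// and unclaimed entities fall through to the default naming
export function chainFileNames(
  modules: GenSettings[],
  fallback: (entityName: string) => string,
): (entityName: string) => string {
  return (entityName) => {
    for (const mod of modules) {
      const result = mod.fileName?.(entityName) ?? '';
      if (result !== '') {
        return result;
      }
    }
    return fallback(entityName);
  };
}
```

The full config later in this guide applies the same idea inline when it collects the *.gen.ts defaults.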

So, let's add

src/modules/user/user.gen.ts
import type { GenerateOptions } from "@mikro-orm/core";

const settings: GenerateOptions = {
  fileName: (entityName) => {
    switch (entityName) {
      case 'UserRepository':
        return `user/user.repository`;
      case 'User':
        return `user/${entityName.toLowerCase()}.entity`;
      case 'Password':
        return `user/password.runtimeType`;
      case 'PasswordType':
        return `user/password.type`;
    }
    return '';
  },
  onInitialMetadata: (metadata, platform) => {
    const userEntity = metadata.find(meta => meta.className === 'User');
    if (userEntity) {
      userEntity.repositoryClass = 'UserRepository';
      userEntity.addProperty({
        persist: false,
        name: 'token',
        nullable: true,
        default: null,
        defaultRaw: 'null',
        fieldNames: [platform.getConfig().getNamingStrategy().propertyToColumnName('token')],
        columnTypes: ['varchar(255)'],
        type: 'string',
        runtimeType: 'string',
      });
      const passwordProp = userEntity.properties.password;
      passwordProp.hidden = true;
      passwordProp.lazy = true;
      passwordProp.type = 'PasswordType';
      passwordProp.runtimeType = 'Password';
    }
  },
};
export default settings;

and

src/modules/article/article.gen.ts
import type { GenerateOptions } from "@mikro-orm/core";

const settings: GenerateOptions = {
  fileName: (entityName) => {
    switch (entityName) {
      case '_Article':
        return `article/article.entity`;
      case 'Article':
        return `article/article.customEntity`;
      case 'ArticleTag':
      case 'Tag':
      case 'Comment':
        return `article/${entityName.toLowerCase()}.entity`;
    }
    return '';
  },
  onInitialMetadata: (metadata, platform) => {
    const articleEntity = metadata.find(meta => meta.className === 'Article');
    if (articleEntity) {
      const textProp = articleEntity.properties.text;
      textProp.lazy = true;
    }
  },
  onProcessedMetadata: (metadata, platform) => {
    const articleEntity = metadata.find(meta => meta.className === 'Article');
    if (articleEntity) {
      articleEntity.className = '_Article';
      articleEntity.abstract = true;
    }
  },
};
export default settings;

最后,让我们将它们连接到我们的配置中。我们将使用 globby 来匹配与配置本身相关的所有 *.gen.ts 文件。我们使用 globby,因为它已经存在 - 它是 MikroORM 通过路径搜索实体时使用的。我们将在顶部预先过滤结果,以便实际实体处理速度更快。我们的完整配置如下:

¥And finally, let's hook them up in our config. We'll use globby to match all *.gen.ts files relative to the config itself. We're using globby because it is already available: it is what MikroORM uses when searching for entities by a path. We'll pre-collect the hook functions at the top, so that the actual entity processing is faster. Our full config is thus:

src/mikro-orm.config.ts
import {
  defineConfig,
  type MikroORMOptions,
} from '@mikro-orm/mysql';
import { UnderscoreNamingStrategy, type GenerateOptions } from '@mikro-orm/core';
import { Migrator } from '@mikro-orm/migrations';
import pluralize from 'pluralize';
import { join, dirname } from 'node:path';
import { sync } from 'globby';
import { fileURLToPath } from 'node:url';

const isInMikroOrmCli = process.argv[1]?.endsWith(join('@mikro-orm', 'cli', 'esm')) ?? false;
const isRunningGenerateEntities = isInMikroOrmCli && process.argv[2] === 'generate-entities';

const mikroOrmExtensions: MikroORMOptions['extensions'] = [Migrator];

const fileNameFunctions: NonNullable<GenerateOptions['fileName']>[] = [];
const onInitialMetadataFunctions: NonNullable<GenerateOptions['onInitialMetadata']>[] = [];
const onProcessedMetadataFunctions: NonNullable<GenerateOptions['onProcessedMetadata']>[] = [];

if (isInMikroOrmCli) {
  mikroOrmExtensions.push((await import('@mikro-orm/entity-generator')).EntityGenerator);
  if (isRunningGenerateEntities) {
    const fileDir = dirname(fileURLToPath(import.meta.url));
    const genExtensionFiles = sync('./modules/**/*.gen.ts', { cwd: fileDir });
    for (const file of genExtensionFiles) {
      const genExtension = (await import(file)).default as GenerateOptions;
      if (genExtension.fileName) {
        fileNameFunctions.push(genExtension.fileName);
      }
      if (genExtension.onInitialMetadata) {
        onInitialMetadataFunctions.push(genExtension.onInitialMetadata);
      }
      if (genExtension.onProcessedMetadata) {
        onProcessedMetadataFunctions.push(genExtension.onProcessedMetadata);
      }
    }
  }
}

export default defineConfig({
  extensions: mikroOrmExtensions,
  multipleStatements: isInMikroOrmCli,
  discovery: {
    warnWhenNoEntities: !isInMikroOrmCli,
  },
  entities: isRunningGenerateEntities ? [] : ['dist/**/*.customEntity.js', 'dist/**/*.entity.js'],
  entitiesTs: isRunningGenerateEntities ? [] : ['src/**/*.customEntity.ts', 'src/**/*.entity.ts'],
  host: 'localhost',
  user: 'root',
  password: '',
  dbName: 'blog',
  // enable debug mode to log SQL queries and discovery information
  debug: true,
  migrations: {
    path: 'dist/migrations',
    pathTs: 'src/migrations',
  },
  namingStrategy: class extends UnderscoreNamingStrategy {
    override getEntityName(tableName: string, schemaName?: string): string {
      return pluralize.singular(super.getEntityName(tableName, schemaName));
    }
  },
  entityGenerator: {
    scalarTypeInDecorator: true,
    fileName: (entityName) => {
      for (const f of fileNameFunctions) {
        const r = f(entityName);
        if (r === '') {
          continue;
        }
        return r;
      }
      return `common/${entityName.toLowerCase()}.entity`;
    },
    onInitialMetadata: (metadata, platform) => {
      return Promise.all(onInitialMetadataFunctions.map(f => f(metadata, platform))).then();
    },
    onProcessedMetadata: (metadata, platform) => {
      return Promise.all(onProcessedMetadataFunctions.map(f => f(metadata, platform))).then();
    },
    save: true,
    path: 'src/modules',
    esmImport: true,
    outputPurePivotTables: true,
    readOnlyPivotTables: true,
    bidirectionalRelations: true,
    customBaseEntityName: 'Base',
    useCoreBaseEntity: true,
  },
});

此时的重新生成应该产生与我们迄今为止所得到的结果没有什么不同的结果。但现在你可以添加额外的 *.gen.ts 文件,每个文件修改某些实体的某些方面。

¥Regeneration at this point should produce results no different from what we've had so far. But you can now add extra *.gen.ts files, each modifying some aspect of some entities.

练习

你可以尝试实现不同的模式来处理 *.gen.ts 文件,例如根据表名接受给定实体的元数据,并在这些扩展期间根据需要注册文件名条目。当第一次调用 fileName 时,onInitialMetadataonProcessedMetadata 已经完成执行,因此它们可以确定其行为。在大多数情况下,这样做可能有点过头了,但如果你有想要应用于不同模式的扩展,而不仅仅是你完全控制的模式,这可能会有所帮助。

¥You can try to implement a different pattern for handling the *.gen.ts files, such as accepting the metadata of a given entity based on the table name, and register file name entries as needed during those extensions. By the time fileName is first called, onInitialMetadata and onProcessedMetadata have already finished executing, so they can determine its behavior. Doing this is probably overkill for most cases, but it may be helpful if you have extensions that you want to apply on different schemas, not just one you are fully in control of.

⛳ 检查点 3

¥⛳ Checkpoint 3

我们的应用现在结构类似于企业级应用,已准备好添加更多模块或对现有模块进行进一步添加。我们甚至在此过程中进行了另一次迁移。我们现在准备添加更多功能。

¥Our application is now structured like an enterprise level application, ready for further modules or further additions to the existing modules. We even made another migration along the way. We are now ready to add more features.

完成项目

¥Completing the project

添加剩余的文章端点

¥Add the remaining article endpoints

让我们添加剩余的文章端点。

¥Let's add the remaining article endpoints.

让我们从关于按 slug 查看文章并添加评论开始:

¥Let's start with one about viewing an article by slug and adding a comment:

src/modules/article/article.routes.ts
// rest of the code

const articleBySlugParams = z.object({
  slug: z.string().min(1),
});

app.get('/:slug', async request => {
  const { slug } = articleBySlugParams.parse(request.params);
  return db.article.findOneOrFail({ slug }, {
    populate: ['author', 'commentCollection.author', 'text'],
  });
});

const articleCommentPayload = z.object({
  text: z.string().min(1),
});

app.post('/:slug/comment', async request => {
  const { slug } = articleBySlugParams.parse(request.params);
  const { text } = articleCommentPayload.parse(request.body);
  const author = getUserFromToken(request);
  const article = await db.article.findOneOrFail({ slug });
  const comment = db.comment.create({ author, article, text });

  // We can add the comment to the `article.commentCollection`
  // collection, but in fact it is a no-op, as it will be
  // automatically propagated by setting the Comment.article property.
  article.commentCollection.add(comment);

  // note that we don't need to persist anything explicitly
  await db.em.flush();

  return comment;
});

// rest of the code

如果你将代码与 "代码优先" 指南中的等效代码进行比较,你会注意到我们添加了 Zod 进行一些基本验证。此外,生成器使用名称 "commentCollection" 来表示与 "comments" 表的关系。默认值由实体名称组成,结合后缀 "集合"(用于 1:N 关系)或 "Inverse"(用于 M:N 关系)。如果愿意,我们可以通过覆盖 inverseSideName 来调整命名策略(例如,通过获取实体名称并使用 pluralize 将其转换为复数),但为了避免与表本身中定义的属性发生潜在冲突,让我们保持原样。我们更有可能用实体的复数形式命名列,而不是在末尾用 "_collection" 或 "_inverse" 命名,这样在当前形式下发生冲突的可能性就更小。

¥If you compare the code with the equivalent from the "code first" guide, you will notice that we've added Zod for some basic validation. Also, the generator used the name "commentCollection" to represent the relation to the "comments" table. The default is formed from the entity name, combined with the suffix "Collection" for 1:N relations, or "Inverse" for M:N relations. We could adjust that in the naming strategy by overriding inverseSideName if we'd like (e.g. by taking the entity name and converting it to plural with pluralize), but to avoid potential conflicts with properties defined in the table itself, let's keep it as is. We're more likely to name a column with the plural form of an entity than we are to name it with "_collection" or "_inverse" at the end, making conflicts less likely in their current form.
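If you do want to experiment with such an override, the transformation itself is easy to sketch in isolation. The snippet below is a self-contained illustration of deriving a plural inverse-side name; the naive pluralizer is only a stand-in for the pluralize library, and in a real config you would override inverseSideName on your naming strategy class rather than call a free function:

```typescript
// Minimal stand-in for `pluralize`; handles only regular nouns.
function naivePlural(word: string): string {
  if (/(?:[sxz]|[cs]h)$/.test(word)) return `${word}es`;
  if (/[^aeiou]y$/.test(word)) return `${word.slice(0, -1)}ies`;
  return `${word}s`;
}

// Hypothetical inverse-side naming: "Comment" -> "comments"
// instead of the generator default "commentCollection".
function inverseSideName(entityName: string): string {
  const lower = entityName.charAt(0).toLowerCase() + entityName.slice(1);
  return naivePlural(lower);
}

console.log(inverseSideName('Comment')); // "comments"
console.log(inverseSideName('Category')); // "categories"
```

As noted above, naming the inverse side with the plural form raises the chance of clashing with a real column named e.g. "comments", which is why we keep the default suffixes.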

接下来,让我们尝试添加文章创建端点:

¥Next, let's try to add the article creation endpoint:

src/modules/article/article.routes.ts
// rest of the code above

const newArticlePayload = z.object({
  title: z.string().min(1),
  text: z.string().min(1),
  description: z.string().min(1).optional(),
});

app.post('/', async request => {
  const { title, text, description } = newArticlePayload.parse(request.body);
  const author = getUserFromToken(request);
  const article = db.article.create({
    title,
    text,
    author,
    description,
  });

  await db.em.flush();

  return article;
});
// rest of the code

你应该看到类型错误。这是因为我们的实体将 slug 和 description 声明为必需属性。这里有三种解决方案。第一个可能的解决方案是直接使用文章构造函数并持久化新实体。

¥You should be seeing a type error. This is because our entity declares slug and description as required properties. There are three solutions here. The first possible solution is to use the article constructor directly and persist the new entity.

第二个是为 article 创建自定义实体存储库,我们在其中重写 create 方法或添加一个调用构造函数并持久化新实体的自定义方法。我们将跳过显示这些解决方案。

¥The second is to create a custom entity repository for article, in which we override the create method or add a custom one that calls the constructor and persists the new entity. We'll skip showing these solutions.

练习

尝试实现这些解决方案。一旦你可以构建应用,请立即退后一步。

¥Try to implement these solutions as well. Step back as soon as you can build the application.

第三个是将这些属性声明为可选。做到这一点的最佳方法是在我们映射的超类中将它们声明为可选。

¥And the third one is to declare those properties as optional. The best way to do that is to declare them as optional in our mapped superclass.

src/modules/article/article.customEntity.ts
-import { Entity, type Rel } from '@mikro-orm/core';
+import { Entity, OptionalProps, type Rel } from '@mikro-orm/core';
import { _Article } from './article.entity.js';
import { User } from '../user/user.entity.js';

function convertToSlug(text: string) {
  return text
    .toLowerCase()
    .replace(/[^\w ]+/g, '')
    .replace(/ +/g, '-');
}

@Entity({ tableName: 'articles' })
export class Article extends _Article {
+
+  [OptionalProps]?: 'slug' | 'description';

  constructor(title: string, text: string, author: Rel<User>) {
    super();
    this.title = title;
    this.text = text;
    this.author = author;
    this.slug = convertToSlug(title);
    this.description = this.text.substring(0, 999) + '…';
  }

}

从技术上讲,我们可以通过在 onInitialMetadataonProcessedMetadata 中修改基类将它们声明为可选,但如果我们出于某种原因想要绕过超类,我们将容易因缺少 slug 和描述而出错。

¥Technically, we could declare them as optional in the base class by modifying that in onInitialMetadata or onProcessedMetadata, but if we ever want to bypass the superclass for whatever reason, we will be prone to errors from the missing slug and description.

修改映射的超类后,代码现在再次编译。

¥After the modifications to the mapped superclass, the code now compiles again.

对于我们的下两个端点,我们希望确保只有文章的作者可以更新和删除它。检查本身很简单,但如果用户与文章作者不同,我们可以抛出一个单独的错误,导致 403 Forbidden

¥For our next two endpoints, we'll want to ensure only the author of an article can update and delete it. The check itself is trivial, but let's make it so that we throw a separate error that results in 403 Forbidden if the user is different from the author of an article.

我们来添加错误:

¥Let's add the error:

src/modules/common/disallowed.error.ts
export class DisallowedError extends Error {}

并在 hooks.ts 中添加处理:

¥And add handling for it in hooks.ts:

src/modules/common/hooks.ts
...
import { AuthError } from './auth.error.js';
+import { DisallowedError } from './disallowed.error.js';
...
app.setErrorHandler((error, request, reply) => {
  if (error instanceof AuthError) {
    return reply.status(401).send(error);
  }

+  if (error instanceof DisallowedError) {
+    return reply.status(403).send(error);
+  }
...

现在我们准备添加文章端点以按 ID 更新和删除文章:

¥And we're now ready to add the article endpoints to update and remove an article by ID:

src/modules/article/article.routes.ts
// rest of the imports
import { DisallowedError } from '../common/disallowed.error.js';
import { wrap } from '@mikro-orm/mysql';

// rest of the code

const articleByIdParams = z.object({
  id: z.coerce.number().int().positive(),
});
const updateArticlePayload = newArticlePayload.partial().extend({
  slug: z.string().min(1).optional(),
});

app.patch('/:id', async request => {
  const user = getUserFromToken(request);
  const { id } = articleByIdParams.parse(request.params);
  const article = await db.article.findOneOrFail(id);
  if (article.author !== user) {
    throw new DisallowedError('Only the author of an article is allowed to update it');
  }

  wrap(article).assign(updateArticlePayload.parse(request.body));
  await db.em.flush();

  return article;
});

app.delete('/:id', async request => {
  const user = getUserFromToken(request);
  const { id } = articleByIdParams.parse(request.params);
  const article = await db.article.findOneOrFail(id);
  if (article.author !== user) {
    throw new DisallowedError('Only the author of an article is allowed to delete it');
  }

  // alternatively, use `nativeDelete` here if we don't care much about validations
  await db.em.remove(article).flush();

  return { success: true };
});
// rest of the code

感谢 MikroORM 的身份映射,我们可以像上面那样比较对象。

¥Thanks to MikroORM's identity map, we can compare the objects directly, as we've done above.
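As a rough illustration of why `===` is safe here (this shows the principle, not MikroORM's actual implementation): an identity map caches the first loaded instance per primary key, so every later lookup of the same key yields the very same object:

```typescript
// Sketch of an identity map: one instance per primary key.
class IdentityMap<T> {
  private cache = new Map<number, T>();

  load(id: number, fetch: (id: number) => T): T {
    const hit = this.cache.get(id);
    if (hit !== undefined) return hit; // same instance as the first load
    const fresh = fetch(id);
    this.cache.set(id, fresh);
    return fresh;
  }
}

const im = new IdentityMap<{ id: number; name: string }>();
const a = im.load(1, id => ({ id, name: 'alice' }));
const b = im.load(1, id => ({ id, name: 'alice' }));
console.log(a === b); // true: reference equality works as an identity check
```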

我们还添加一个端点来更新我们的用户个人资料:

¥Let's also add an endpoint to update our user profile:

src/modules/user/user.routes.ts
...
-import { UniqueConstraintViolationException } from '@mikro-orm/mysql';
+import { UniqueConstraintViolationException, wrap } from '@mikro-orm/mysql';
...
+  const profileUpdatePayload = signUpPayload.partial();
+
+  app.patch('/profile', async (request) => {
+    const user = getUserFromToken(request);
+    wrap(user).assign(profileUpdatePayload.parse(request.body));
+    await db.em.flush();
+    return user;
+  });
练习

为这些新端点添加单元测试。

¥Add unit tests for those new endpoints.

可嵌入实体

¥Embeddable entities

MikroORM 提供可嵌入对象,可用于两种用途之一。

¥MikroORM offers embeddable objects, which can serve one of two purposes.

  1. 将相关列分组到属性下的表中。

    ¥Group related columns in a table under a property.

  2. 为查询 JSON 列提供更像实体的体验。

    ¥Provide a more entity-like experience to querying JSON columns.

当实体生成器在元数据中编码时,它们足够强大,可以输出此类实体。但是,我们需要大量修改元数据以添加新的可嵌入实体并添加对它们的引用。手动编写可嵌入实体也是完全有效的,只需在实体生成期间添加对它们的引用即可。我们将探索两种类型的可嵌入对象以及生成它们的两种方式。

¥The entity generator is powerful enough to output such entities, when they are encoded in the metadata. However, we need to heavily alter the metadata to add new embeddable entities and add references to them. It is also perfectly valid to write embeddable entities manually, and just add references to them during entity generation. We'll explore both types of embeddables and both ways of generating them.

可嵌入为一组列

¥Embeddable as a group of columns

首先,用于列的分组。在我们的大多数实体中,我们有 "created_at" 和 "updated_at" 列,但不是全部(例如:数据透视表)。让我们制定一个策略,向任何具有此类列的实体添加可选的 "_track" 属性。该属性将是一个具有这两个字段的可嵌入对象。我们还将从其原始属性中删除它们,只保留可嵌入对象中的副本。为简单起见,我们假设所有这些列的类型和默认值都是正确的。

¥First, for the grouping of columns. In most of our entities, we have "created_at" and "updated_at" columns, but not quite all of them (case in point: the pivot tables). Let's make it a policy to add an optional "_track" property to any entity with such columns. That property will be an embeddable object having those two fields. We'll also remove them from their original properties, keeping only the copy in the embeddable object. For simplicity, we'll assume the type and defaults of all such columns are correct.

通常,可嵌入对象映射到使用属性作为前缀形成的列。在我们的例子中,那将是 "track_created_at" 和 "track_updated_at"。我们不想这样,所以我们将 prefix 选项设置为 false,这样最后我们仍然映射到 created_atupdated_at

¥Normally, embeddable objects map to a column formed by using the property as a prefix. In our case, that would be "track_created_at" and "track_updated_at". We don't want that, so we will set the prefix option to false, so that in the end, we still map to created_at and updated_at.

src/modules/common/track.gen.ts
import { EntityMetadata, ReferenceKind, type GenerateOptions } from '@mikro-orm/core';

const settings: GenerateOptions = {
  onInitialMetadata: (metadata, platform) => {
    for (const meta of metadata) {
      if (
        typeof meta.properties.createdAt !== 'undefined' &&
        typeof meta.properties.updatedAt !== 'undefined'
      ) {
        meta.removeProperty('createdAt', false);
        meta.removeProperty('updatedAt', false);
        meta.addProperty(
          {
            name: '_track',
            kind: ReferenceKind.EMBEDDED,
            optional: true,
            nullable: true,
            type: 'Track',
            runtimeType: 'Track',
            prefix: false,
            object: false,
          },
          false,
        );
        meta.sync();
      }
    }

    const trackClass = new EntityMetadata({
      className: 'Track',
      tableName: 'track',
      embeddable: true,
      relations: [],
    });
    trackClass.addProperty(
      {
        name: 'createdAt',
        fieldNames: ['created_at'],
        columnTypes: ['datetime'],
        type: 'datetime',
        runtimeType: 'Date',
        defaultRaw: 'CURRENT_TIMESTAMP',
      },
      false,
    );
    trackClass.addProperty(
      {
        name: 'updatedAt',
        fieldNames: ['updated_at'],
        columnTypes: ['datetime'],
        type: 'datetime',
        runtimeType: 'Date',
        defaultRaw: 'CURRENT_TIMESTAMP',
      },
      false,
    );
    trackClass.sync();

    metadata.push(trackClass);
  },
};
export default settings;

如果你现在重新生成实体,你将看到已创建 "src/modules/common/track.entity.ts",并且其他类现在正在引用它。由于我们正在动态创建类,因此我们可以使用 *.entity.ts 扩展保存它。

¥If you regenerate the entities now, you'll see "src/modules/common/track.entity.ts" created, and other classes are now referencing it. Since we are creating the class dynamically, we can keep it saved with an *.entity.ts extension.

警告

在处理较大的项目并进行类似的修改时,你应该对列类型、可空性和默认值进行额外检查。仅当所有元数据与可嵌入内容一致时才采取行动对列进行分组。否则,请保留属性。在编写迁移期间可能会发生错误。你的实体生成扩展可以(并且应该)对此类错误具有弹性。如果在输出中没有得到你期望的修改,则会提醒你存在这样的错误。

¥When working on bigger projects and doing similar modifications, you should do extra checks on the column type, nullability, and default value. Take action to group columns only when all of their metadata lines up with what you have in the embeddable. Otherwise, leave the properties alone. Mistakes can happen during the authoring of migrations. Your entity generation extensions can (and should) be made resilient towards such mistakes. Not getting the modification in the output when you expect it will alert you that there is such a mistake.

可嵌入为 JSON 列的类型

¥Embeddable as a type of JSON column

我们已经探索了自定义类型,也可以将它们用于 JSON 列。但这样做意味着你选择退出 MikroORM 辅助查询 JSON 中的属性。对于自定义类型,就 MikroORM 而言,JSON 列只是一个随机对象,只有在获取它之后才能将其作为对象处理。当然,你始终可以将查询写入 JSON 属性(它仍然是 JSON 列),但在 MikroORM 调用中,IDE 不会自动补齐对象成员名称。可嵌入属性则不然。

¥We explored custom types already, and can use them for JSON columns as well. But doing so means you opt out of MikroORM-assisted queries to properties within the JSON. With a custom type, the JSON column is just an opaque object as far as MikroORM is concerned, and you only get to deal with it as an object after having fetched it. You can always write queries to JSON properties, of course (it's still a JSON column), but there will be no auto complete for object member names in your IDE within MikroORM calls. Not so with embeddable properties.
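Here is a tiny standalone sketch of the difference (no MikroORM involved): when the JSON shape is described by a type, the compiler rejects misspelled property names. The socialPath helper below is hypothetical and only demonstrates the typing, using MySQL's ->> JSON extraction syntax:

```typescript
// Shape of the JSON column, as an embeddable would describe it.
interface Social {
  twitter?: string;
  facebook?: string;
  linkedin?: string;
}

// Hypothetical helper: only known keys of `Social` compile.
// socialPath('twiter') would be a compile-time error, not a
// silent runtime miss - this is the benefit embeddables give you.
function socialPath(key: keyof Social): string {
  return `social->>'$.${key}'`;
}

console.log(socialPath('twitter')); // social->>'$.twitter'
```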

我们的模式目前缺少任何 JSON 属性。我们来添加一个。例如,我们可以使用一个来存储用户的社交媒体账户。

¥Our schema is currently lacking any JSON properties. Let's add one. We can use one to store social media accounts of users, for example.

让我们首先添加迁移并重新生成我们的实体以包含 "social" 属性。

¥Let's start by adding migration and regenerating our entities to include the "social" property.

src/migrations/Migration00000000000003.ts
import { Migration } from '@mikro-orm/migrations';

export class Migration00000000000003 extends Migration {

  async up(): Promise<void> {
    await this.execute(`
      ALTER TABLE \`users\`
        ADD COLUMN \`social\` JSON NULL DEFAULT NULL AFTER \`bio\`;
    `);
  }

  async down(): Promise<void> {
    await this.execute(`
      ALTER TABLE \`users\`
        DROP COLUMN \`social\`;
    `);
  }

}

执行迁移并重新生成实体。你应该看到使用运行时类型 "any" 定义的 "social" 列。好的,是时候添加可嵌入的了。

¥Execute the migration and regenerate the entities. You should see the "social" column defined with runtime type "any". OK, time to add the embeddable.

我们可以手动定义可嵌入类:

¥We can define the embeddable class manually:

src/modules/user/social.customEntity.ts
import { Embeddable, Property, type Opt } from "@mikro-orm/mysql";

@Embeddable()
export class Social {

  @Property({ type: 'string' })
  twitter!: string & Opt;

  @Property({ type: 'string' })
  facebook!: string & Opt;

  @Property({ type: 'string' })
  linkedin!: string & Opt;

}

公平地说,这个类足够简单,我们也可以动态定义它。但是,如果你想添加辅助方法(例如,获取完整链接,而不仅仅是用户名),你可能需要手动定义它。

¥In fairness, this class is simple enough that we may as well define it dynamically. But if you'd like to add helper methods (e.g. for deriving the full profile link out of just a username), you may want to define it manually.

现在,让我们修改我们的 user.gen.ts 以引用可嵌入的:

¥Now, let's modify our user.gen.ts to reference the embeddable:

src/modules/user/user.gen.ts
-import { type GenerateOptions } from "@mikro-orm/core";
+import { ReferenceKind, type GenerateOptions } from "@mikro-orm/core";
...
      case 'PasswordType':
        return `user/password.type`;
+      case 'Social':
+        return `user/social.customEntity`;
...
      passwordProp.runtimeType = 'Password';
+
+      const socialProp = userEntity.properties.social;
+      socialProp.kind = ReferenceKind.EMBEDDED;
+      socialProp.type = 'Social';
+      socialProp.prefix = false;
+      socialProp.object = true;
...

再次重新生成实体,我们现在有可嵌入的内容来表示 JSON 的内容列。与列组一样,相关 JSON 属性有一个前缀,但通过将 "prefix" 设置为 "false",我们可以确保实体中的 props 映射到 JSON 列中的相同属性。"object" 选项设置为 "true" 是我们如何设置该属性来表示 JSON 列,而不是一组列。

¥Regenerating the entities again, we now have the embeddable representing the contents of the JSON column. Just as with column groups, there is a prefix for the related JSON properties, but by setting "prefix" to "false", we can ensure the props in our entities map to the same properties in the JSON column. And the "object" option being set to "true" is how we set that property to represent a JSON column, rather than a group of columns.

练习

尝试向 JSON 列添加检查约束。可嵌入性有助于确保你的应用不会接触到未知属性,或者能够将未知属性输入数据库。但是,对数据库的直接查询可能会插入不具有所需形状的对象。更糟糕的是,它们可能设置相同的属性,但内部的数据类型不同。如果读出,最终可能会导致你的应用崩溃。至少检查已知属性的检查约束将消除任何可能性。

¥Try to add a check constraint to the JSON column too. The embeddable helps ensure your application won't get exposed to unknown properties, or be able to enter unknown properties into the database. However, direct queries to your database may insert objects that won't have the required shape. Worse still, they may set the same properties, but with a different data type inside. That may ultimately crash your application if read out. A check constraint that at least checks the known properties will remove any possibility of that.

我们应该更新我们的用户端点以接受这个新属性:

¥We should update our user endpoints to accept this new property:

src/modules/user/user.routes.ts
const signUpPayload = z.object({
  email: z.string().email(),
  password: z
    .string()
    .min(1)
    .transform(async (raw) => Password.fromRaw(raw)),
  fullName: z.string().min(1),
  bio: z.string().optional().default(''),
+  social: z
+    .object({
+      twitter: z.string().min(1).optional(),
+      facebook: z.string().min(1).optional(),
+      linkedin: z.string().min(1).optional(),
+    })
+    .optional(),
});

公式属性

¥Formula properties

有时,你希望每行包含一些动态数据。当你需要的所有数据都在行中时,生成的列可以提供帮助,但是当你想要包含一些非确定性的东西(如当前时间)或其他表中的东西(可能是聚合)作为值时怎么办?在 SQL 中,这可以通过 SELECT 子句中的子查询来完成。MikroORM 允许你将此类子查询定义为 @Formula 修饰属性。这些不是从数据库中推断出来的,但你可以使用 onInitialMetadataonProcessedMetadata 添加这些。

¥Sometimes, you want to include some dynamic data per row. Generated columns can help when all the data you need is in the row, but what about when you want to include something non-deterministic (like the current time) or something from other tables (an aggregation perhaps) as the value? In SQL, this can be done with a subquery in your SELECT clause. MikroORM allows you to define such subqueries as @Formula decorated properties. These are not inferred from the database, but you can add them with onInitialMetadata and onProcessedMetadata.

让我们以这种方式向文章列表添加评论计数。我们将使该属性变得懒惰,这样我们就不必每次获得文章时都计算它。

¥Let's add a count of comments to the article listing in that fashion. We'll make the property lazy, so that we don't necessarily compute it every time we get an article.

src/modules/article/article.gen.ts
...
      textProp.lazy = true;
+
+      articleEntity.addProperty({
+        name: 'commentsCount',
+        fieldNames: ['comments_count'],
+        columnTypes: ['INT'],
+        unsigned: true,
+        optional: true,
+        type: 'integer',
+        runtimeType: 'number',
+        default: 0,
+        lazy: true,
+        formula: (alias) => `(SELECT COUNT(*) FROM comments WHERE article = ${alias}.id)`,
+      });
...

让我们重新生成实体。最后,让我们确保将其添加到列表中:

¥And let's regenerate the entities. Lastly, let's ensure we add it to the listing:

src/modules/article/article.routes.ts
...
const [items, total] = await db.article.findAndCount(
  {},
  {
+    populate: ['commentsCount'],
    limit,
    offset,
  },
);
...

在实体生成期间使用查询构建器

¥Using the query builder during entity generation

请注意,我们必须将完整的查询写成字符串。那么如果我们将来重命​​名列会发生什么?没有任何类型的实体生成或构建错误。如果我们填充属性,我们只会在运行时收到错误。这不好。我们可以通过基于元数据动态构建查询来解决这个问题,并在最后进行别名的最终替换。这将确保如果我们重命名了所涉及的表或列,我们将在实体生成期间收到错误。

¥Notice that we had to write the full query as a string. So what happens if we rename the columns in the future? No entity generation or build errors of any kind. We would only get an error at runtime, if we populate the property. That's not good. We can remedy that by constructing the query dynamically based on the metadata, and do the final replacement of the alias at the end. This will ensure we get errors during entity generation if we've renamed the involved table or columns.

src/modules/article/article.gen.ts
import type { GenerateOptions } from '@mikro-orm/core';
+import { type SqlEntityManager, Utils } from '@mikro-orm/mysql';
...
      textProp.lazy = true;

+      const commentEntity = metadata.find((meta) => meta.className === 'Comment');
+      if (!commentEntity) {
+        return;
+      }
+      const em = platform.getConfig().getDriver().createEntityManager() as SqlEntityManager;
+      const qb = em.getKnex().queryBuilder().count().from(commentEntity.tableName).where(
+        commentEntity.properties.article.fieldNames[0],
+        '=',
+        em.getKnex().raw('??.??', [em.getKnex().raw('??'), commentEntity.properties.id.fieldNames[0]]),
+      );
+      const formula = Utils.createFunction(
+        new Map(),
+        `return (alias) => ${JSON.stringify(`(${qb.toSQL().sql})`)}.replaceAll('??', alias)`,
+      );
+
      articleEntity.addProperty({
        name: 'commentsCount',
        fieldNames: ['comments_count'],
        columnTypes: ['INT'],
        unsigned: true,
        optional: true,
        type: 'integer',
        runtimeType: 'number',
        default: 0,
        lazy: true,
-        formula: (alias) => `(SELECT COUNT(*) FROM comments WHERE article = ${alias}.id)`,
+        formula,
      });

如果你现在重新生成实体,你将在输出中看到与我们之前略有不同的函数,但它仍然执行相同的工作。有了这个,如果我们将所涉及的列从 articlescomments 重命名,实体生成器就会出错。如果我们删除或重命名 Comment 实体,生成将跳过 commentsCount 属性,如果我们引用它,则会产生构建错误。

¥If you regenerate the entities now, you'll see a slightly different function in the output from what we had before, but it still does the same job. With this in place, if we rename the involved columns from articles or comments, the entity generator would error. If we delete or rename the Comment entity, generation would skip the commentsCount property, which would in turn create build errors if we were to reference it.

部署

¥Deployment

在没有 ts-node 的情况下运行

¥Running without ts-node

我们已经添加了一个 "check" 脚本来检查我们的代码而不发出任何内容。让我们实际发出输出,并使用 node 而不是 ts-node 运行它:

¥We have already added a "check" script to check our code without emitting anything. Let's actually emit our output, and run it with node rather than ts-node:

package.json
  "scripts": {
    "build": "tsc --build",
    "start:prod": "node ./dist/server.js",
    ...
  }

因为我们已经在所有地方注释了 "type",所以应用无需进一步修改即可运行。如果你要使用打包器而不是 tsc,则可能需要进行其他配置。如果打包器正在破坏你的类名和属性名(例如 NextJS 项目默认这样做),你可以调整命名策略以始终生成 tableNamefieldNames 选项(例如,在 classToTableNamepropertyToColumnName 中无条件返回空字符串),然后重新生成实体。这将确保无论 JS 标识符最终如何出现在生产包中,它们都将映射到数据库中正确的表和列。

¥Because we already have "type" annotated everywhere, the application just works without further modifications. If you were to use a bundler instead of tsc, you may need additional configuration. If the bundler mangles your class and property names (e.g. NextJS projects do that by default), you can adjust your naming strategy to always generate the tableName and fieldNames options (e.g. by unconditionally returning an empty string in classToTableName and propertyToColumnName), and regenerate your entities. This will ensure that no matter how the JS identifiers end up in the production bundle, they will map to the correct tables and columns in your database.
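As a sketch of that last idea, with a stub class standing in for MikroORM's UnderscoreNamingStrategy (a real config would extend the actual class instead):

```typescript
// Stub base class; the real one derives SQL names from JS identifiers.
class StubNamingStrategy {
  classToTableName(entityName: string): string {
    return entityName.toLowerCase();
  }
  propertyToColumnName(propertyName: string): string {
    return propertyName.toLowerCase();
  }
}

// Returning '' means "no derived name", which forces the entity
// generator to emit explicit tableName/fieldNames options, making
// the generated entities immune to bundler name mangling.
class ExplicitOnlyNamingStrategy extends StubNamingStrategy {
  override classToTableName(_entityName: string): string {
    return '';
  }
  override propertyToColumnName(_propertyName: string): string {
    return '';
  }
}

const ns = new ExplicitOnlyNamingStrategy();
console.log(ns.classToTableName('Article')); // "" (empty string)
```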

⛳ 检查点 4

¥⛳ Checkpoint 4

我们的应用已完全准备好部署。你可以随时添加更多功能,优化某些字段的性能,使错误处理更好,更少使用 "as",等等。

¥Our application is fully ready to be deployed. You can always add more features, optimize performance in some areas, make error handling nicer, use "as" even less, and so on.