How do you properly manage database deployment with SSDT and Visual Studio 2012 database projects?

I've been working on this myself, and I can tell you it's not easy.

First, to address the reply by JT - you cannot dismiss "versions", even with the declarative update mechanics that SSDT has. SSDT does a "pretty decent" job (provided you know all the switches and gotchas) of moving any source schema to any target schema, and it's true that this doesn't require versioning per se, but it has no idea how to manage "data motion" (at least not that I can see!). So, just like DBProj, you're left to your own devices in Pre/Post scripts. Because the data motion scripts depend on a known start and end schema state, you cannot avoid versioning the DB. The "data motion" scripts, therefore, must be applied to a versioned snapshot of the schema, which means you cannot arbitrarily update a DB from v1 to v8 and expect the data motion scripts v2 to v8 to work (presumably, you wouldn't need a v1 data motion script).

Sadly, I can't see any mechanism in SSDT publishing that allows me to handle this scenario in an integrated way. That means you'll have to add your own scaffolding.

The first trick is to track versions within the database (and SSDT project). I started using a trick in DBProj, and brought it over to SSDT, and after doing some research, it turns out that others are using this too. You can apply a DB Extended Property to the database itself (call it "BuildVersion" or "AppVersion" or something like that), and apply the version value to it. You can then capture this extended property in the SSDT project itself, and SSDT will add it as a script (you can then check the publish option that includes extended properties). I then use SQLCMD variables to identify the source and target versions being applied in the current pass. Once you identify the delta of versions between the source (project snapshot) and target (target db about to be updated), you can find all the snapshots that need to be applied. Sadly, this is tricky to do from inside the SSDT deployment, and you'll probably have to move it to the build or deployment pipeline (we use TFS automated deployments and have custom actions to do this).
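
As a rough sketch of that trick - assuming the property is named "BuildVersion" and the new value arrives as a SQLCMD variable of the same name (both names are mine, not anything SSDT prescribes) - the post-deployment script can read and re-stamp the version like this:

    -- Read the version currently stamped on the target database
    -- (class 0 = the database itself).
    DECLARE @currentVersion nvarchar(128) =
        CONVERT(nvarchar(128),
            (SELECT value FROM sys.extended_properties
             WHERE class = 0 AND name = N'BuildVersion'));

    -- Stamp (or re-stamp) the new version at the end of the deployment.
    IF @currentVersion IS NULL
        EXEC sp_addextendedproperty @name = N'BuildVersion', @value = N'$(BuildVersion)';
    ELSE
        EXEC sp_updateextendedproperty @name = N'BuildVersion', @value = N'$(BuildVersion)';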

The next hurdle is to keep snapshots of the schema with their associated data motion scripts. In this case, it helps to make the scripts as idempotent as possible (meaning you can rerun them without any ill side-effects). It helps to separate scripts that can safely be rerun from scripts that must be executed one time only. We're doing the same thing with static reference data (dictionary or lookup tables) - in other words, we have a library of MERGE scripts (one per table) that keep the reference data in sync, and these scripts are included in the post-deployment script (via the SQLCMD :r command). The important thing to note here is that you must execute them in the correct order in case any of these reference tables have FK references to each other. We include them in the main post-deployment script in order, and it helps that we created a tool that generates these scripts for us - it also resolves dependency order. We run this generation tool at the close of a "version" to capture the current state of the static reference data. All your other data motion scripts are basically going to be special-case, and most likely single-use only. In that case, you can do one of two things: you can use an IF statement against the db build/app version, or you can wipe out the one-time scripts after creating each snapshot package.
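
A minimal sketch of both patterns, using a hypothetical dbo.Color lookup table and an invented, sortable version string (neither is prescribed by SSDT):

    -- Color.sql: one idempotent MERGE per reference table, pulled into the
    -- post-deployment script with ":r .\ReferenceData\Color.sql" in FK
    -- dependency order. Table and values are hypothetical.
    MERGE dbo.Color AS target
    USING (VALUES
        (1, N'Red'),
        (2, N'Green'),
        (3, N'Blue')
    ) AS source (ColorId, Name)
        ON target.ColorId = source.ColorId
    WHEN MATCHED AND target.Name <> source.Name THEN
        UPDATE SET Name = source.Name
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ColorId, Name) VALUES (source.ColorId, source.Name)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;
    GO

    -- A one-time data motion script guarded by the stored version, so a rerun
    -- against an already-upgraded database is a no-op. The zero-padded version
    -- scheme is an assumption to make the string comparison sort correctly.
    IF (SELECT CONVERT(nvarchar(128), value)
        FROM sys.extended_properties
        WHERE class = 0 AND name = N'BuildVersion') < N'0002.0000'
    BEGIN
        PRINT 'running v2 data motion here';
    END
    GO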

It helps to remember that SSDT will disable FK check constraints and only re-enable them after the post-deployment scripts run. This gives you a chance to populate new non-null fields, for example (by the way, you have to enable the option to generate temporary "smart" defaults for non-null columns to make this work). However, FK check constraints are only disabled for tables that SSDT is recreating because of a schema change. For other cases, you are responsible for ensuring that data motion scripts run in the proper order to avoid check constraint complaints (or you have to manually disable/re-enable them in your scripts).
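
For the manual case, the disable/re-enable dance looks like this (table and constraint names are hypothetical); re-enabling WITH CHECK is what re-validates the existing rows and keeps the constraint trusted:

    -- Disable the FK so rows can be moved in an order that would
    -- otherwise violate it.
    ALTER TABLE dbo.OrderLine NOCHECK CONSTRAINT FK_OrderLine_Order;

    -- ... data motion goes here ...

    -- Re-enable and re-validate; the double CHECK is intentional:
    -- "WITH CHECK" validates existing rows, "CHECK CONSTRAINT" turns it back on.
    ALTER TABLE dbo.OrderLine WITH CHECK CHECK CONSTRAINT FK_OrderLine_Order;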

DACPAC can help you because DACPAC is essentially a snapshot. It will contain several XML files describing the schema (similar to the build output of the project), but frozen in time at the moment you create it. You can then use SQLPACKAGE.EXE or the deploy provider to publish that package snapshot. I haven't quite figured out how to use the DACPAC versioning, because it's more tied to "registered" data apps, so we're stuck with our own versioning scheme, but we do put our own version info into the DACPAC filename.
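
Publishing a package snapshot from the command line looks roughly like this (the server, database, file, and variable names are placeholders):

    REM Publish a versioned DACPAC snapshot; /v: passes a SQLCMD variable
    REM through to the pre/post-deployment scripts.
    SqlPackage.exe /Action:Publish ^
        /SourceFile:"MyDb_v0003.0000.dacpac" ^
        /TargetServerName:"MYSERVER" ^
        /TargetDatabaseName:"MyDb" ^
        /v:BuildVersion=0003.0000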

I wish I had a more conclusive and exhaustive example to provide, but we're still working out the issues here too.

One thing that really sucks about SSDT is that, unlike DBProj, it's currently not extensible. Although it does a much better job than DBProj at a lot of different things, you can't override its default behavior unless you can find some way around a problem inside the pre/post scripts. One of the issues we're trying to resolve right now is that the default method of recreating a table for updates (CCDR) really stinks when you have tens of millions of records.
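
For context, CCDR is the create/copy/drop/rename rebuild SSDT generates when a change can't be made in place. Simplified, the generated script has roughly this shape (hypothetical table; the real script also wraps it in a transaction and handles IDENTITY, defaults, and FKs), which shows why the row copy dominates at tens of millions of records:

    -- Simplified shape of the table rebuild, for a hypothetical dbo.BigTable
    -- gaining a NOT NULL column.
    CREATE TABLE dbo.tmp_ms_xx_BigTable
    (
        Id        int           NOT NULL PRIMARY KEY,
        Payload   nvarchar(100) NOT NULL,
        NewColumn int           NOT NULL DEFAULT 0   -- the added column
    );

    -- Every existing row is copied; this insert (plus index rebuilds) is
    -- what dominates the deployment time on large tables.
    INSERT INTO dbo.tmp_ms_xx_BigTable (Id, Payload)
    SELECT Id, Payload FROM dbo.BigTable;

    DROP TABLE dbo.BigTable;
    EXECUTE sp_rename N'dbo.tmp_ms_xx_BigTable', N'BigTable';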

UPDATE: I haven't seen this post in some time, but apparently it's been active lately, so I thought I'd add a couple of important notes: if you are using VS2012, the June 2013 release of SSDT now has a Data Comparison tool built-in, and also provides extensibility points - that is to say, you can now include Build Contributors and Deployment Plan Modifiers for the project.