Uber jar with custom folder structure with IntelliJ and SBT


I am fairly new to cloud and SBT/IntelliJ, so I am trying my luck with the IntelliJ & SBT build environment to deploy my jar on a Dataproc cluster.

Here's a screenshot of my project structure:

The code is quite simple, with main defined in 'mytestmain', which calls another method defined in 'ReadYamlConfiguration'. That method needs a moultingyaml dependency, which I have included as shown in my build.sbt.
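Roughly, the shape is like this (a trimmed sketch; the loadConfig helper and the YAML content are illustrative, not my exact code):

// A trimmed sketch of the two objects described above; the helper
// name loadConfig and the YAML content are illustrative only.
package com.test.processing.jobs

import net.jcazevedo.moultingyaml._
import net.jcazevedo.moultingyaml.DefaultYamlProtocol._

object ReadYamlConfiguration {
  // Parses a YAML string into a simple key/value map via moultingyaml
  def loadConfig(yaml: String): Map[String, String] =
    yaml.parseYaml.convertTo[Map[String, String]]
}

object mytestmain {
  def main(args: Array[String]): Unit = {
    val config = ReadYamlConfiguration.loadConfig("input: gs://my-test-bucket/data")
    println(config("input"))
    // ... actual job logic would go here ...
  }
}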

Here are my build.sbt & assembly.sbt files:

lazy val root = (project in file(".")).
  settings(
    name := "MyTestProjectNew",
    version := "0.0.1-SNAPSHOT",
    scalaVersion := "2.11.12",
    // fully qualified object name, without the .scala file extension
    mainClass in Compile := Some("com.test.processing.jobs.mytestmain")
  )

libraryDependencies ++= Seq(
  "net.jcazevedo" %% "moultingyaml" % "0.4.2"
)

scalaSource in Compile := baseDirectory.value / "src"

And the assembly.sbt file:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

I created assembly.sbt to build an uber jar that includes the required dependencies, and ran 'sbt assembly' from the terminal. It created an assembly jar successfully, which I was able to deploy and run on the Dataproc cluster:

gcloud dataproc jobs submit spark \
  --cluster my-dataproc-cluster \
  --region europe-north1 \
  --class com.test.processing.jobs.mytestmain \
  --jars gs://my-test-bucket/spark-jobs/MyTestProjectNew-assembly-0.0.1-SNAPSHOT.jar

The code works as expected, with no issues.

Now I would like to have my own custom directory structure as shown below:

For example, I would like to have a folder named 'spark-jobs' with a subdirectory named 'SparkDataProcessing', and then the src/main/scala folder with packages and their respective Scala classes, objects, etc.
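In other words, something like this (sketched from the paths mentioned in this question; the package directories follow the entry point described below):

spark-jobs/
└── SparkDataProcessing/
    └── src/
        └── main/
            └── scala/
                └── com/test/processing/jobs/job/
                    └── mytestmain.scala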

My main method is defined in package 'job' within the 'com.test.processing.jobs' package.

What changes do I need to make in build.sbt? I am looking for a detailed explanation, with a sample build.sbt that matches my project structure. Please also suggest what should be included in the .gitignore file.

I am using IntelliJ IDEA 2020 Community Edition and sbt 1.3.3. I have tried a few things here and there, but I always end up with structure, jar, or build.sbt issues. I was expecting an answer similar to what was done in the post below:

Why does my sourceDirectories setting have no effect in sbt?

As you can see in the pic below, the source directory has been changed to:

spark-jobs/SparkDataProcessing/src/main/Scala

When I build with the path setting below, it does not work:

scalaSource in Compile := baseDirectory.value / "src" 

It works when I keep the default structure, i.e. src/main/scala.

You also need to change the package name after the package keyword at the top of affected files. However, if you refactor using IntelliJ (by creating the packages and then dragging the files into the package using the UI), then IntelliJ will do this for you.
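For example, after moving mytestmain.scala into the new 'job' sub-package, the top of the file would read (a minimal sketch):

// src/main/scala/com/test/processing/jobs/job/mytestmain.scala
package com.test.processing.jobs.job // was: package com.test.processing.jobs

object mytestmain {
  def main(args: Array[String]): Unit = {
    // entry-point logic unchanged; only the package declaration moved
  }
}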

Nothing else needs to be changed (build.sbt and related files can stay the same).
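(This assumes the directory holding build.sbt keeps the standard src/main/scala layout beneath it. If you instead want sbt to pick up sources from a non-standard location, you would have to point scalaSource at that path explicitly; a sketch, assuming the nested layout from the question:)

// Only needed when sources do NOT live under the default src/main/scala
scalaSource in Compile := baseDirectory.value / "spark-jobs" / "SparkDataProcessing" / "src" / "main" / "scala"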

Finally, remember to change the class argument to reflect changes in entrypoint locations; you would pass --class com.test.processing.jobs.job.mytestmain instead of --class com.test.processing.jobs.mytestmain.
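With the submit command from the question, that becomes:

gcloud dataproc jobs submit spark \
  --cluster my-dataproc-cluster \
  --region europe-north1 \
  --class com.test.processing.jobs.job.mytestmain \
  --jars gs://my-test-bucket/spark-jobs/MyTestProjectNew-assembly-0.0.1-SNAPSHOT.jar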

As for .gitignore: please take a look at an example gitignore file which includes:

  • output directories containing "target"
  • IntelliJ directories such as ".idea"

Another gitignore example ignores all the .class files generated by the compiler, which is another approach. In general, you should ignore any dynamically generated files whose changes do not matter to other developers.
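Combining both approaches, a minimal .gitignore sketch for a project like this might look like:

# sbt build output
target/
project/target/
project/project/

# IntelliJ project files
.idea/

# compiled classes
*.class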