ARKit: Moving an object with PanGesture (the right way)

Updated: 2023-02-02 15:34:01


I've been reading plenty of *** answers on how to move an object by dragging it across the screen. Some use hit tests against .featurePoints, some use the gesture translation, or just keep track of the lastPosition of the object. But honestly... none work the way everyone expects them to work.

Hit testing against .featurePoints just makes the object jump all around, because you don't always hit a feature point when dragging your finger. I don't understand why everyone keeps suggesting this.

Solutions like this one work: Dragging SCNNode in ARKit Using SceneKit

But the object doesn't really follow your finger, and the moment you take a few steps or change the angle of the object or the camera and then try to move the object, the x and z are all inverted - and it makes total sense that it behaves that way.

I really want to move objects as well as the Apple demo does, but when I look at the code from Apple... it is insanely weird and overcomplicated, and I can barely understand a bit of it. Their technique for moving the object so beautifully is not even close to what everyone proposes online. https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality

There's gotta be a simpler way to do it.

Short answer: To get this nice and fluid dragging effect like in the Apple demo project, you will have to do it like in the Apple demo project (Handling 3D Interaction). On the other hand, I agree with you that the code might be confusing if you look at it for the first time. It is not easy at all to calculate the correct movement for an object placed on a floor plane - always, and from every location or viewing angle. It's a complex code construct that achieves this superb dragging effect. Apple did a great job to achieve this, but didn't make it too easy for us.

Full answer: Stripping down the AR Interaction template to your needs results in a nightmare - but it should work too if you invest enough time. If you prefer to begin from scratch, basically start with a common Swift ARKit/SceneKit Xcode template (the one containing the spaceship).

You will also need the entire AR Interaction template project from Apple. (The link is included in the SO question.) In the end you should be able to drag something called a VirtualObject, which is in fact a special SCNNode. In addition you will have a nice focus square that can be useful for whatever purpose - like initially placing objects, or adding a floor or a wall. (Some of the code for the dragging effect and the focus square usage is kind of merged or linked together - doing it without the focus square would actually be more complicated.)

Get started: Copy the following files from the AR Interaction template to your empty project:

  • Utilities.swift (usually I name this file Extensions.swift; it contains some basic extensions that are required)
  • FocusSquare.swift
  • FocusSquareSegment.swift
  • ThresholdPanGesture.swift
  • VirtualObject.swift
  • VirtualObjectLoader.swift
  • VirtualObjectARView.swift
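Just for orientation while copying: in the Apple template, VirtualObject is essentially an SCNReferenceNode subclass (and therefore an SCNNode) with some anchor bookkeeping. A simplified sketch - the real file does more (model loading, alignment handling), so always check your copied VirtualObject.swift for the actual details:

```swift
import ARKit
import SceneKit

// Simplified sketch of what VirtualObject.swift contains - only the
// parts the dragging code below relies on are shown here.
class VirtualObject: SCNReferenceNode {

    /// The ARAnchor the object is currently attached to, if any.
    var anchor: ARAnchor?

    /// Walks up the node hierarchy to find the enclosing VirtualObject, if any.
    static func existingObjectContainingNode(_ node: SCNNode) -> VirtualObject? {
        if let virtualObjectRoot = node as? VirtualObject {
            return virtualObjectRoot
        }
        guard let parent = node.parent else { return nil }
        return existingObjectContainingNode(parent)
    }
}
```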

Add the UIGestureRecognizerDelegate to the ViewController class definition like so:

class ViewController: UIViewController, ARSCNViewDelegate, UIGestureRecognizerDelegate {

Add this code to your ViewController.swift, in the definitions section, right before viewDidLoad:

// MARK: for the Focus Square
// SUPER IMPORTANT: the screenCenter must be defined this way
var focusSquare = FocusSquare()
var screenCenter: CGPoint {
    let bounds = sceneView.bounds
    return CGPoint(x: bounds.midX, y: bounds.midY)
}
var isFocusSquareEnabled : Bool = true


// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
/// The tracked screen position used to update the `trackedObject`'s position in `updateObjectToCurrentTrackingPosition()`.
private var currentTrackingPosition: CGPoint?

/**
 The object that has been most recently interacted with.
 The `selectedObject` can be moved at any time with the tap gesture.
 */
var selectedObject: VirtualObject?

/// The object that is tracked for use by the pan and rotation gestures.
private var trackedObject: VirtualObject? {
    didSet {
        guard trackedObject != nil else { return }
        selectedObject = trackedObject
    }
}

/// Developer setting to translate assuming the detected plane extends infinitely.
let translateAssumingInfinitePlane = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

In viewDidLoad, before you set up the scene, add this code:

// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let panGesture = ThresholdPanGesture(target: self, action: #selector(didPan(_:)))
panGesture.delegate = self

// Add gestures to the `sceneView`.
sceneView.addGestureRecognizer(panGesture)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

At the very end of your ViewController.swift add this code:

// MARK: - Pan Gesture Block
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
@objc
func didPan(_ gesture: ThresholdPanGesture) {
    switch gesture.state {
    case .began:
        // Check for interaction with a new object.
        if let object = objectInteracting(with: gesture, in: sceneView) {
            trackedObject = object // as? VirtualObject
        }

    case .changed where gesture.isThresholdExceeded:
        guard let object = trackedObject else { return }
        let translation = gesture.translation(in: sceneView)

        let currentPosition = currentTrackingPosition ?? CGPoint(sceneView.projectPoint(object.position))

        // The `currentTrackingPosition` is used to update the `selectedObject` in `updateObjectToCurrentTrackingPosition()`.
        currentTrackingPosition = CGPoint(x: currentPosition.x + translation.x, y: currentPosition.y + translation.y)

        gesture.setTranslation(.zero, in: sceneView)

    case .changed:
        // Ignore changes to the pan gesture until the threshold for displacement has been exceeded.
        break

    case .ended:
        // Update the object's anchor when the gesture ended.
        guard let existingTrackedObject = trackedObject else { break }
        addOrUpdateAnchor(for: existingTrackedObject)
        fallthrough

    default:
        // Clear the current position tracking.
        currentTrackingPosition = nil
        trackedObject = nil
    }
}

// - MARK: Object anchors
/// - Tag: AddOrUpdateAnchor
func addOrUpdateAnchor(for object: VirtualObject) {
    // If the anchor is not nil, remove it from the session.
    if let anchor = object.anchor {
        sceneView.session.remove(anchor: anchor)
    }

    // Create a new anchor with the object's current transform and add it to the session
    let newAnchor = ARAnchor(transform: object.simdWorldTransform)
    object.anchor = newAnchor
    sceneView.session.add(anchor: newAnchor)
}


private func objectInteracting(with gesture: UIGestureRecognizer, in view: ARSCNView) -> VirtualObject? {
    for index in 0..<gesture.numberOfTouches {
        let touchLocation = gesture.location(ofTouch: index, in: view)

        // Look for an object directly under the `touchLocation`.
        if let object = virtualObject(at: touchLocation) {
            return object
        }
    }

    // As a last resort look for an object under the center of the touches.
    // return virtualObject(at: gesture.center(in: view))
    return virtualObject(at: (gesture.view?.center)!)
}


/// Hit tests against the `sceneView` to find an object at the provided point.
func virtualObject(at point: CGPoint) -> VirtualObject? {

    // let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    let hitTestResults = sceneView.hitTest(point, options: [SCNHitTestOption.categoryBitMask: 0b00000010, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
    // let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    // let hitTestResults = sceneView.hitTest(point, options: hitTestOptions)

    return hitTestResults.lazy.compactMap { result in
        return VirtualObject.existingObjectContainingNode(result.node)
        }.first
}

/**
 If a drag gesture is in progress, update the tracked object's position by
 converting the 2D touch location on screen (`currentTrackingPosition`) to
 3D world space.
 This method is called per frame (via `SCNSceneRendererDelegate` callbacks),
 allowing drag gestures to move virtual objects regardless of whether one
 drags a finger across the screen or moves the device through space.
 - Tag: updateObjectToCurrentTrackingPosition
 */
@objc
func updateObjectToCurrentTrackingPosition() {
    guard let object = trackedObject, let position = currentTrackingPosition else { return }
    translate(object, basedOn: position, infinitePlane: translateAssumingInfinitePlane, allowAnimation: true)
}

/// - Tag: DragVirtualObject
func translate(_ object: VirtualObject, basedOn screenPos: CGPoint, infinitePlane: Bool, allowAnimation: Bool) {
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform,
        let result = smartHitTest(screenPos,
                                  infinitePlane: infinitePlane,
                                  objectPosition: object.simdWorldPosition,
                                  allowedAlignments: [ARPlaneAnchor.Alignment.horizontal]) else { return }

    let planeAlignment: ARPlaneAnchor.Alignment
    if let planeAnchor = result.anchor as? ARPlaneAnchor {
        planeAlignment = planeAnchor.alignment
    } else if result.type == .estimatedHorizontalPlane {
        planeAlignment = .horizontal
    } else if result.type == .estimatedVerticalPlane {
        planeAlignment = .vertical
    } else {
        return
    }

    /*
     Plane hit test results are generally smooth. If we did *not* hit a plane,
     smooth the movement to prevent large jumps.
     */
    let transform = result.worldTransform
    let isOnPlane = result.anchor is ARPlaneAnchor
    object.setTransform(transform,
                        relativeTo: cameraTransform,
                        smoothMovement: !isOnPlane,
                        alignment: planeAlignment,
                        allowAnimation: allowAnimation)
}
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

Add some Focus Square code:

// MARK: - Focus Square (code by Apple, some by me)
func updateFocusSquare(isObjectVisible: Bool) {
    if isObjectVisible {
        focusSquare.hide()
    } else {
        focusSquare.unhide()
    }

    // Perform hit testing only when ARKit tracking is in a good state.
    if let camera = sceneView.session.currentFrame?.camera, case .normal = camera.trackingState,
        let result = smartHitTest(screenCenter) {
        DispatchQueue.main.async {
            self.sceneView.scene.rootNode.addChildNode(self.focusSquare)
            self.focusSquare.state = .detecting(hitTestResult: result, camera: camera)
        }
    } else {
        DispatchQueue.main.async {
            self.focusSquare.state = .initializing
            self.sceneView.pointOfView?.addChildNode(self.focusSquare)
        }
    }
}

And add some control functions:

func hideFocusSquare()  { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: true) } }  // to hide the focus square
func showFocusSquare()  { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: false) } } // to show the focus square

From VirtualObjectARView.swift, COPY! the entire smartHitTest function into ViewController.swift (so it exists twice):

func smartHitTest(_ point: CGPoint,
                  infinitePlane: Bool = false,
                  objectPosition: float3? = nil,
                  allowedAlignments: [ARPlaneAnchor.Alignment] = [.horizontal, .vertical]) -> ARHitTestResult? {

    // Perform the hit test.
    let results = sceneView.hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])

    // 1. Check for a result on an existing plane using geometry.
    if let existingPlaneUsingGeometryResult = results.first(where: { $0.type == .existingPlaneUsingGeometry }),
        let planeAnchor = existingPlaneUsingGeometryResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
        return existingPlaneUsingGeometryResult
    }

    if infinitePlane {

        // 2. Check for a result on an existing plane, assuming its dimensions are infinite.
        //    Loop through all hits against infinite existing planes and either return the
        //    nearest one (vertical planes) or return the nearest one which is within 5 cm
        //    of the object's position.
        let infinitePlaneResults = sceneView.hitTest(point, types: .existingPlane)

        for infinitePlaneResult in infinitePlaneResults {
            if let planeAnchor = infinitePlaneResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
                if planeAnchor.alignment == .vertical {
                    // Return the first vertical plane hit test result.
                    return infinitePlaneResult
                } else {
                    // For horizontal planes we only want to return a hit test result
                    // if it is close to the current object's position.
                    if let objectY = objectPosition?.y {
                        let planeY = infinitePlaneResult.worldTransform.translation.y
                        if objectY > planeY - 0.05 && objectY < planeY + 0.05 {
                            return infinitePlaneResult
                        }
                    } else {
                        return infinitePlaneResult
                    }
                }
            }
        }
    }

    // 3. As a final fallback, check for a result on estimated planes.
    let vResult = results.first(where: { $0.type == .estimatedVerticalPlane })
    let hResult = results.first(where: { $0.type == .estimatedHorizontalPlane })
    switch (allowedAlignments.contains(.horizontal), allowedAlignments.contains(.vertical)) {
    case (true, false):
        return hResult
    case (false, true):
        // Allow fallback to horizontal because we assume that objects meant for vertical placement
        // (like a picture) can always be placed on a horizontal surface, too.
        return vResult ?? hResult
    case (true, true):
        if hResult != nil && vResult != nil {
            return hResult!.distance < vResult!.distance ? hResult! : vResult!
        } else {
            return hResult ?? vResult
        }
    default:
        return nil
    }
}

You might see some errors in the copied function regarding the hitTest calls. Just correct them like so:

hitTest... // which gives an Error
sceneView.hitTest... // this should correct it

Implement the renderer updateAtTime function and add these lines:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // For the Focus Square
    if isFocusSquareEnabled { showFocusSquare() }

    self.updateObjectToCurrentTrackingPosition() // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
}


At this point you might still see about a dozen errors and warnings in the imported files; this can happen when you do this in Swift 5 while some of the files are still Swift 4. Just let Xcode correct the errors. (It's all about renaming some code statements; Xcode knows best.)

Go into VirtualObject.swift and search for this code block:

if smoothMovement {
    let hitTestResultDistance = simd_length(positionOffsetFromCamera)

    // Add the latest position and keep up to 10 recent distances to smooth with.
    recentVirtualObjectDistances.append(hitTestResultDistance)
    recentVirtualObjectDistances = Array(recentVirtualObjectDistances.suffix(10))

    let averageDistance = recentVirtualObjectDistances.average!
    let averagedDistancePosition = simd_normalize(positionOffsetFromCamera) * averageDistance
    simdPosition = cameraWorldPosition + averagedDistancePosition
} else {
    simdPosition = cameraWorldPosition + positionOffsetFromCamera
}

Comment out or replace this entire block with this single line of code:

simdPosition = cameraWorldPosition + positionOffsetFromCamera

At this point you should be able to compile the project and run it on a device. You should see the spaceship and a yellow focus square that should already work.

To start placing an object that you can drag, you need some function to create a so-called VirtualObject, as I said in the beginning.

Use this example function to test (add it somewhere in the view controller):

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {

    if focusSquare.state != .initializing {
        let position = SCNVector3(focusSquare.lastPosition!)

        // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
    let testObject = VirtualObject() // give it some name when you don't have anything to load
        testObject.geometry = SCNCone(topRadius: 0.0, bottomRadius: 0.2, height: 0.5)
        testObject.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        testObject.categoryBitMask = 0b00000010
        testObject.name = "test"
        testObject.castsShadow = true
        testObject.position = position

        sceneView.scene.rootNode.addChildNode(testObject)
    }
}

Note: everything you want to drag on a plane must be set up using VirtualObject() instead of SCNNode(). Everything else regarding the VirtualObject stays the same as for an SCNNode.

(You can also add some common SCNNode extensions, like one to load a scene by name - useful when referencing imported models.)
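A minimal sketch of such an extension, assuming you keep your models as .scn files in the bundle (the scene path and node name below are just placeholder examples - adapt them to your own assets):

```swift
import SceneKit

extension SCNNode {
    /// Loads the first node named `nodeName` from a scene file in the bundle
    /// and wraps it in a new node. `sceneName` and `nodeName` are
    /// hypothetical example parameters, not part of the Apple template.
    convenience init?(sceneName: String, nodeName: String) {
        guard let scene = SCNScene(named: sceneName),
              let loadedNode = scene.rootNode.childNode(withName: nodeName, recursively: true)
        else { return nil }
        self.init()
        self.addChildNode(loadedNode)
    }
}

// Usage (placeholder asset names):
// if let ship = SCNNode(sceneName: "art.scnassets/ship.scn", nodeName: "ship") {
//     sceneView.scene.rootNode.addChildNode(ship)
// }
```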

Have fun!