Hello everyone,
I’m working on an iOS app that fetches videos from the "Recently Deleted" album using the Photos framework in Swift. However, I’m unable to fetch any videos, even though the "Recently Deleted" album contains 233 items (including videos), as seen in the Photos app.
Environment:
iOS Version: 18.3.1
Xcode Version: 16.2
Swift Version: Swift 5
Device: iPhone (simulator and physical device both tested)
Photo Library Permission: "All Photos" access granted
Recently Deleted Lock: Face ID/Passcode is disabled for "Recently Deleted"
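For context, here is a minimal sketch of the kind of fetch involved (not my exact code); it enumerates the smart albums and counts the videos in each, since I am not sure the Recently Deleted collection is exposed through a public PHAssetCollectionSubtype:
import Photos

func listSmartAlbumVideoCounts() {
    let smartAlbums = PHAssetCollection.fetchAssetCollections(with: .smartAlbum, subtype: .any, options: nil)
    smartAlbums.enumerateObjects { collection, _, _ in
        // Count only the videos in each smart album
        let fetchOptions = PHFetchOptions()
        fetchOptions.predicate = NSPredicate(format: "mediaType == %d", PHAssetMediaType.video.rawValue)
        let videos = PHAsset.fetchAssets(in: collection, options: fetchOptions)
        print(collection.localizedTitle ?? "untitled", "videos:", videos.count)
    }
}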
Hello everyone,
I am looking for a way to programmatically import photos into the Photos library on macOS, e.g. using AppleScript, and also push them to the Shared Library, as can be done through the standard GUI of the Photos application.
Maybe it is not possible using AppleScript; perhaps a short Swift script with PhotoKit could do it, but I don't know.
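For the import half, a minimal PhotoKit sketch along these lines should be possible (assuming photo library authorization has been granted; I am not aware of a public API for pushing an asset to the Shared Library afterwards):
import Photos

func importPhoto(at fileURL: URL) {
    PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
        guard status == .authorized else { return }
        PHPhotoLibrary.shared().performChanges({
            // Create a new asset in the library from the file on disk
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photo, fileURL: fileURL, options: nil)
        }) { success, error in
            print("imported:", success, error?.localizedDescription ?? "no error")
        }
    }
}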
Any help is appreciated!
Thomas
When editing an image in the Preview app, the only options are 90-degree rotations to the left and right; there is no option to rotate by an arbitrary angle. Please look into this and provide that option in a future update.
This is an issue with the Insta360 Flow Pro 2.
My iOS app uses DockKit to control the gimbal; in particular, my app disables tracking and sends angular velocity commands to control the gimbal's orientation. I only try to modify the yaw (rotation around the vertical axis), never the pitch or roll. Note that I don't send the gimbal to a particular orientation directly; I modify the velocity.
Everything works great for a long period of time: typically a continuous run of 4-6 hours; in the most recent case, I managed about 36 hours of continuous operation before the following problem occurred.
I came back to check on the system, and because no visual activity had occurred in the camera's field of view for a while, the phone had commanded the gimbal to rotate back to a yaw angle of 0 degrees.
So the phone in the gimbal should have been looking straight ahead (i.e. the 0 degree yaw position), but it was definitely looking off at an angle. I've seen this twice now. The first time, when it should have been looking straight ahead, it was in fact looking 60 degrees off center. This time (caught on video, see below), it was off by 22 degrees from center.
Here's the weird part: the gimbal reports this way-off-center positioning as zero degrees (well, close enough to zero, like 0.2, which is fine). But, mechanically, the gimbal still knows where zero degrees is: if we double-click the trigger of the Flow Pro 2, which is supposed to reset the gimbal to 0 degrees yaw and pitch, the gimbal responds correctly and reorients to the 0 degree position. However, the yaw values it then reports are not zero but, as shown in my video, about 22 degrees off axis.
Power cycling the gimbal and restarting immediately fixes the problem. Also, I switched from my app to the Insta360 app, which caused the phone to flip from landscape to portrait, then when I returned to my app and switched back to landscape, the gimbal now started reporting correct yaw angles.
Is there a possibility this is a bug in the DockKit framework? Has anyone seen this? I have a case open with Insta360, but although it's clearly a software issue, it's not clear whether it's in Insta360's code or in the DockKit layer. Any ideas for how I can get out of this mode? My concern is that the phone is on a tripod about 10' off the floor and not very accessible. Also, if all goes well, we may have about 50 of these systems running, and having to fix them one by one after a few hours is not workable.
For a demonstration of this bug, see the following video:
https://octoparry.com/offset.MOV
Any help greatly appreciated.
I want to create a Live Photo. The project includes a .jpg image and a .mov video (2 seconds). I am sure they are correct.
Two permission keys have been added in Xcode:
Privacy - Photo Library Usage Description
Privacy - Photo Library Additions Usage Description
Simulator: iPhone 16, iOS 18.3
The code in ContentView.swift:
private func saveLivePhoto(imageURL: URL, videoURL: URL, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges {
        let creationRequest = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false
        creationRequest.addResource(with: .photo, fileURL: imageURL, options: options)
        creationRequest.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    } completionHandler: { success, error in
        DispatchQueue.main.async {
            print(error)
            completion(success, error)
        }
    }
}
guard let imageURL = Bundle.main.url(forResource: "livephoto", withExtension: "jpeg"),
      let videoURL = Bundle.main.url(forResource: "livephoto", withExtension: "mov") else {
    showAlertMessage(title: "error", message: "cant find Live Photo ")
    return
}
print("imageURL: \(imageURL)")
print("videoURL: \(videoURL)")
saveLivePhoto(imageURL: imageURL, videoURL: videoURL) { success, error in
    if success {
        xxxxx
    } else {
        xxxxx
    }
}
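One diagnostic worth trying (a sketch, separate from the code above): ask PHLivePhoto to assemble the same two files. If it cannot, the .jpg/.mov pair is probably missing the shared metadata that links a Live Photo's still and video.
_ = PHLivePhoto.request(withResourceFileURLs: [imageURL, videoURL],
                        placeholderImage: nil,
                        targetSize: .zero,
                        contentMode: .aspectFit) { livePhoto, info in
    // If livePhoto stays nil, the two files do not form a valid Live Photo pair.
    print("livePhoto:", livePhoto as Any, "info:", info)
}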
Really need help, thanks
I have a complex CoreImage pipeline which I'm keen to optimise. I'm aware that calling back to the CPU can have a significant impact on the performance - what is the best way to find out where this is happening?
Hey,
I have a camera app that captures a ProRAW photo and then runs a few Core Image filters before saving it to the device as a HEIC. However, I'm finding that capturing at 48 MP is rather slow. Testing a minimal pipeline on an iPhone 16 Pro:
Shutter press => file received in output: 1.2 ~ 1.6s
CIRawFilter created using photo file representation then rendered to context, without any filters: 0.8s ~ 1s
Saving to device ~0.15s
Is this the expected time for capture and processing? The native camera app seems to save images within half a second. I'm using QualityPrioritization.balanced and the highest resolution available, which is 48 MP.
Would using the CIRawFilter with the pixelBuffer from the photo output be faster? I tried it but couldn't get it to output an image. Are there any other things I could try to speed this up? Is it possible to capture at 24MP instead?
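For reference, a sketch of the minimal pipeline being timed (raw decode via CIRAWFilter from the photo's file data, then a HEIF write; the output color space here is just an example):
import AVFoundation
import CoreImage

func renderAndSave(photo: AVCapturePhoto, to url: URL, context: CIContext) throws {
    guard let rawData = photo.fileDataRepresentation(),
          let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: nil),
          let image = rawFilter.outputImage else { return }
    // Raw decode + render is the ~0.8-1.0 s step; the HEIF write is the ~0.15 s step
    try context.writeHEIFRepresentation(of: image,
                                        to: url,
                                        format: .RGBA8,
                                        colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
                                        options: [:])
}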
Thanks,
Alex
I’m developing a hybrid app (WebView / Turbo Native) that uses getUserMedia to access the back camera for a PPG/heart rate measurement feature (the user places their finger on the camera).
Problem: Even when I specify constraints like:
{
  video: {
    deviceId: '...',
    facingMode: { exact: 'environment' },
    advanced: [{ zoom: 1.0 }]
  },
  audio: false
}
On iPhone 15 (iOS 18), iOS unexpectedly switches between the wide, ultra-wide, and telephoto lenses during the measurement.
This breaks the heart rate detection, and it forces the user to move their finger in the middle of the measurement.
Question: Is there any way, via getUserMedia/WebRTC, to force iOS to use only the wide-angle lens and prevent automatic lens switching?
I know that with AVFoundation (Swift) you can pick .builtInWideAngleCamera, but I’m hoping to avoid building a custom native layer and would prefer to stick with WebView/JavaScript if possible to save time and complexity.
Any suggestions, workarounds, or updates from Apple would be greatly appreciated!
Thanks a lot!
I am following the Apple sample code and trying to add a manual focus lens position slider:
@available(iOS 18.0, *)
private func addCameraControls() {
    if !self.session.controls.isEmpty {
        for control in self.session.controls {
            self.session.removeControl(control)
        }
    }
    self.cameraControlFocusSlider = nil

    // Focus slider
    if self.videoDevice!.isLockingFocusWithCustomLensPositionSupported {
        self.cameraControlFocusSlider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        self.cameraControlFocusSlider!.setActionQueue(self.sessionQueue) { focusValue in
            // Do manual focus
        }
        if self.session.canAddControl(self.cameraControlFocusSlider!) {
            self.session.addControl(self.cameraControlFocusSlider!)
        }
    }
}
So there are these AVCaptureSessionControlsDelegate methods:
final func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeActive")
}

final func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillEnterFullscreenAppearance")
}

final func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillExitFullscreenAppearance")
}

final func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeInactive")
}
So when self.cameraControlFocusSlider is presented, I have to show the current value of the lens position. The lens position can change from autofocus and also from manual focus by the user via the app UI. Is there a way to see if self.cameraControlFocusSlider is active or being used?
Please note that I will have more than one AVCaptureSlider in the final code.
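A possible direction (a sketch; it assumes AVCaptureSlider's value property is settable and relies on lensPosition being key-value observable) is to mirror lens position changes into the slider:
private var lensPositionObservation: NSKeyValueObservation?

private func syncFocusSliderWithLensPosition() {
    lensPositionObservation = videoDevice?.observe(\.lensPosition, options: [.new]) { [weak self] device, _ in
        guard let self, let slider = self.cameraControlFocusSlider else { return }
        self.sessionQueue.async {
            // Reflect autofocus and in-app manual focus changes in the capture control
            slider.value = device.lensPosition
        }
    }
}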
Hi
This is one of our top crashes. The stack trace does not contain any of our code and we can't reproduce it, which makes this crash very hard to understand and fix. We know that most of the crashes happen on iPhone 13 with iOS 18.x.x. We also see that many cases occur when the app goes into the background (the stack trace contains -[UIApplication _applicationDidEnterBackground]).
2025-03-04_16-06-00.3670_-0500-6a273c7d5da97f098b5cc24898bb9761dc45208e.crash
2025-03-04_20-21-08.6609_-0500-2c08f640900f8a62c4f8a4f6f2a61feb052e66dd.crash
2025-03-04_20-46-27.7138_+0000-4d7ea89b1b564eda22ca63e708f7ad3909c7b768.crash
I found that when I build and run my app with Xcode 16 or later, if I enable the app's floating-window feature and then open the system camera, the content in the floating window is not displayed; the window shows a black screen. This does not happen when building the same code with Xcode 15.4, and I do not know why.
Is it possible to use AVExternalStorageDevice to access external storage from a connected camera or USB drive (via the USB-C or Lightning connector) on an iPad/iPhone?
I have tested the following code on an iPhone 14 (iOS 18.1.1) and an iPad Gen 10 (18.3.1), and both return false for:
// returns false on iPhone 14, iPad gen 10
print(AVExternalStorageDeviceDiscoverySession.isSupported)
The following code returns nil when I try to access the external storage discovery session.
// returns null on iOS devices
print(AVExternalStorageDeviceDiscoverySession.shared)
The following returns false, without displaying a permission dialog:
AVExternalStorageDevice.requestAccess(completionHandler: { (granted: Bool) in
    // returns false with no permission dialog
    print(granted)
})
What type of iOS devices are supported by AVExternalStorageDeviceDiscoverySession?
What situations has it been used for (e.g. connecting to a camera via the external storage protocol, accessing photos from an SD card with an adapter, accessing photos from a USB drive)?
Is there any sample code for using the AVExternalStorageDevice API?
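For completeness, the overall flow being attempted looks roughly like this (a sketch; the externalStorageDevices property name is my assumption for enumerating whatever the shared session reports):
func discoverExternalStorage() {
    guard AVExternalStorageDeviceDiscoverySession.isSupported,
          let discoverySession = AVExternalStorageDeviceDiscoverySession.shared else {
        print("External storage discovery is not supported on this device")
        return
    }
    AVExternalStorageDevice.requestAccess { granted in
        guard granted else {
            print("Access to external storage devices was not granted")
            return
        }
        for device in discoverySession.externalStorageDevices {
            print(device) // inspect connected cameras / drives here
        }
    }
}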
Hi,
Currently I am developing a 3D reconstruction project, which requires images that are distortion-free (rectilinear) and have known intrinsics.
The capture session uses a builtInDualWideCamera, with isGeometricDistortionCorrectionEnabled set to false so that I can get the intrinsic matrix of the images, isVirtualDeviceConstituentPhotoDeliveryEnabled set to true and isAutoVirtualDeviceFusionEnabled set to false to get both constituent images, and isCameraCalibrationDataDeliveryEnabled set to true to actually get the calibration data.
The distortion correction parameters, such as lensDistortionLookupTable, are used. The 42-coefficient mapping array is applied as described in the AVCameraCalibrationData header file: a simple piecewise linear interpolation.
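For reference, a Swift sketch of that piecewise linear lookup (following the sample in the header; point and opticalCenter are in pixel coordinates):
import simd

// Piecewise linear lookup of the radial magnification, as described in
// AVCameraCalibrationData.h: table entries are indexed by radial distance
// from the distortion optical center, normalized to the farthest image corner.
func correctedPoint(for point: SIMD2<Float>,
                    lookupTable: [Float],
                    opticalCenter: SIMD2<Float>,
                    imageSize: SIMD2<Float>) -> SIMD2<Float> {
    let delta = point - opticalCenter
    let radius = simd_length(delta)
    let maxDelta = SIMD2<Float>(max(opticalCenter.x, imageSize.x - opticalCenter.x),
                                max(opticalCenter.y, imageSize.y - opticalCenter.y))
    let maxRadius = simd_length(maxDelta)

    let magnification: Float
    if radius < maxRadius {
        let position = radius / maxRadius * Float(lookupTable.count - 1)
        let index = Int(position)
        let fraction = position - Float(index)
        let next = min(index + 1, lookupTable.count - 1)
        magnification = (1 - fraction) * lookupTable[index] + fraction * lookupTable[next]
    } else {
        magnification = lookupTable[lookupTable.count - 1]
    }
    // Apply the radial magnification relative to the optical center
    return opticalCenter + delta * (1 + magnification)
}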
There are two questions I would like to get support on:
1. A way to set the calibration parameters in each image.
I have an approach that sets the parameters in the kCGImagePropertyExifDictionary -> "UserComment". Is there a better approach to write calibration parameter data into the images? I feel like this is a bit dirty and there might be a better and neat approach.
2. For the ultra-wide angle camera's images, the lensDistortionLookupTable contains several zeros at the end of the array.
For example (last 10 elements are zero):
"LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000"
The problem comes when the complete array is used to correct the image (including zeros), the end result is a wrapped-like-circle image close to the edges of it which is completely wrong.
In contrast, if the LensDistortionLookupTable is used without the trailing zeros and the new array size is accounted for, the image looks better (although not as rectilinear as an image taken with the iPhone's camera app), but it is definitely less distorted.
Including zeros (full array):
Excluding zeros (array size changed):
Am I missing an important point in the usage of the lensDistortionLookupTable where this case is addressed (zeros at the end)?
What is the criteria to shrink/exclude elements of the array?
Any advice is very much welcome.
I am writing an iOS app to present a slide show of assets in a Photo album, in a random order, including videos and live photos. I have got it all working quite nicely but for a Live Photo, I need to know what effect is selected (Live, Loop, Bounce, Long Exposure, Live Off) to display the image correctly. I can't find any mention of getting this information in the documentation. Anyone know how to do this? Thanks in advance.
Adrian.
(Xcode 16.1 iOS 18.0)
Hi,
I'm using Core Graphics to load a .DNG photo shot by a Leica Q3 camera.
The photo is shot in portrait, however the embedded preview is rotated 90 degrees to landscape.
I load the photo like this:
let options = [kCGImageSourceDecodeRequest: kCGImageSourceDecodeToHDR] as CFDictionary
let source = CGImageSourceCreateWithData(data as CFData, nil)
let cgimage = CGImageSourceCreateImageAtIndex(source, 0, options)
let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString : Any]
When doing this I can see that the orientation property is 1 indicating that the orientation is 'Up', which it isn't.
If I don't specify the kCGImageSourceDecodeToHDR option (essentially setting options to nil), the orientation property is 8 (rotated 90 degrees).
What puzzles me is how a change to the CGImageSourceCreateImageAtIndex call can influence the later call to CGImageSourceCopyPropertiesAtIndex.
I would expect these to work independently?
Cheers
Thomas
I set the device format and color space to Apple Log and turn off HDR, so why is the movie output still in HDR format rather than ProRes Log?
Full runnable demo here:
https://github.com/SpaceGrey/ColorSpaceDemo
session.sessionPreset = .inputPriority

// get the back camera
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
backCamera = deviceDiscoverySession.devices.first!

try! backCamera.lockForConfiguration()
backCamera.automaticallyAdjustsVideoHDREnabled = false
backCamera.isVideoHDREnabled = false
let formats = backCamera.formats
let appleLogFormat = formats.first { format in
    format.supportedColorSpaces.contains(.appleLog)
}
print(appleLogFormat!.supportedColorSpaces.contains(.appleLog))
backCamera.activeFormat = appleLogFormat!
backCamera.activeColorSpace = .appleLog
print("colorspace is Apple Log \(backCamera.activeColorSpace == .appleLog)")
backCamera.unlockForConfiguration()

do {
    let input = try AVCaptureDeviceInput(device: backCamera)
    session.addInput(input)
} catch {
    print(error.localizedDescription)
}

// add output
output = AVCaptureMovieFileOutput()
session.addOutput(output)
let connection = output.connection(with: .video)!
print(
    output.outputSettings(for: connection)
)
/*
["AVVideoWidthKey": 1920, "AVVideoHeightKey": 1080, "AVVideoCodecKey": apch, <----- ProRes is enabled.
 "AVVideoCompressionPropertiesKey": {
    AverageBitRate = 220029696;
    ExpectedFrameRate = 30;
    PrepareEncodedSampleBuffersForPaddedWrites = 1;
    PrioritizeEncodingSpeedOverQuality = 0;
    RealTime = 1;
 }]
*/

previewSource = DefaultPreviewSource(session: session)
queue.async {
    self.session.startRunning()
}
}
I want to modify the photo's EXIF information. That means passing the original image through CGImageDestinationAddImageFromSource and then supplying the modified EXIF information via the properties parameter.
Here is the complete process:
static func setImageMetadata(asset: PHAsset, exif: [String: Any]) {
    let options = PHContentEditingInputRequestOptions()
    options.canHandleAdjustmentData = { (adjustmentData) -> Bool in
        return true
    }
    asset.requestContentEditingInput(with: options, completionHandler: { input, map in
        guard let inputNN = input else {
            return
        }
        guard let url = inputNN.fullSizeImageURL else {
            return
        }
        let output = PHContentEditingOutput(contentEditingInput: inputNN)
        let adjustmentData = PHAdjustmentData(formatIdentifier: AppInfo.appBundleId(), formatVersion: AppInfo.appVersion(), data: Data())
        output.adjustmentData = adjustmentData
        let outputURL = output.renderedContentURL
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
            return
        }
        guard let dest = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.jpeg.identifier as CFString, 1, nil) else {
            return
        }
        CGImageDestinationAddImageFromSource(dest, source, 0, exif as CFDictionary)
        let d = CGImageDestinationFinalize(dest)
        // d is true, and I checked the content of outputURL: the image has been written correctly and can be converted to a UIImage without problems.
        PHPhotoLibrary.shared().performChanges {
            let changeReq = PHAssetChangeRequest(for: asset)
            changeReq.contentEditingOutput = output
        } completionHandler: { succ, err in
            if !succ {
                print(err) // error 3303 here, always!
            }
        }
    })
}
Hello, my company is developing a product that will send data to/from the phone over cable and Wi-Fi. I have three questions:
Do we need an MFi authentication chip in our product if we plan to send video and commands to the iPhone/iPad over USB or Lightning cable?
Likewise, do we need an MFi authentication chip for communication over Wi-Fi? (Informal research suggests the answer is no.)
And do we even still need MFi certification at all for Wi-Fi communication? (We are not using HomeKit.)
Thank you!
I am trying to recreate the iOS Messages app photo selection UI, where a PHPickerViewController is displayed half screen, with the message text field and a scrolling photo viewer on top. I have that UI mostly working, but cannot figure out how to allow a user to remove an image from my scrolling photo viewer (just like in the iOS Messages app).
When the picker is initially displayed, I can show selected images using the preselectedAssetIdentifiers. However, if the user taps the "x" to remove an image from the scrolling photo viewer, there is no way that I have found to update that selection in the picker.
I can dismiss/show a new picker with the animated property set to false, but that creates a very apparent bounce in the screen. Are there any ways I am missing to accomplish this?
Here is what I have so far: