
WeScan's Introduction

WeScan

WeScan makes it easy to add scanning functionalities to your iOS app! It's modelled after UIImagePickerController, which makes it a breeze to use.

Features

  • Fast and lightweight
  • Live scanning of documents
  • Edit detected rectangle
  • Auto scan and flash support
  • Support for both PDF and UIImage
  • Translated to English, Chinese, Italian, Portuguese, and French
  • Batch scanning

Demo

Requirements

  • Swift 5.0
  • iOS 10.0+

Installation

Swift Package Manager

The Swift Package Manager is a tool for automating the distribution of Swift code and is integrated into the Swift compiler. WeScan supports its use on supported platforms.

Once you have your Swift package set up, adding WeScan as a dependency is as easy as adding it to the dependencies value of your Package.swift.

dependencies: [
    .package(url: "https://github.com/WeTransfer/WeScan.git", .upToNextMajor(from: "2.1.0"))
]

Usage

Swift

  1. To make the framework available, add `import WeScan` at the top of your Swift source file

  2. In the Info.plist, add the NSCameraUsageDescription key with a string explaining why your app needs camera access; iOS shows this message when asking the user for camera permission
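For example, the Info.plist entry could look like this (the description string is a placeholder; use wording appropriate to your app):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan your documents.</string>
```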

  3. Make sure that your view controller conforms to the ImageScannerControllerDelegate protocol:

class YourViewController: UIViewController, ImageScannerControllerDelegate {
    // YourViewController code here
}
  4. Implement the delegate functions inside your view controller:
func imageScannerController(_ scanner: ImageScannerController, didFailWithError error: Error) {
    // You are responsible for carefully handling the error
    print(error)
}

func imageScannerController(_ scanner: ImageScannerController, didFinishScanningWithResults results: ImageScannerResults) {
    // The user successfully scanned an image, which is available in the ImageScannerResults
    // You are responsible for dismissing the ImageScannerController
    scanner.dismiss(animated: true)
}

func imageScannerControllerDidCancel(_ scanner: ImageScannerController) {
    // The user tapped 'Cancel' on the scanner
    // You are responsible for dismissing the ImageScannerController
    scanner.dismiss(animated: true)
}
  5. Finally, create and present an ImageScannerController instance somewhere within your view controller:
let scannerViewController = ImageScannerController()
scannerViewController.imageScannerDelegate = self
present(scannerViewController, animated: true)

Objective-C

  1. Create a dummy Swift class in your project. When Xcode asks whether you'd like to create a bridging header, press 'Create Bridging Header'
  2. In the new bridging header, #import the Objective-C class (e.g. #import "MyClass.h") in which you want to use WeScan
  3. In your Objective-C class, import the generated Swift header (#import "YourProjectName-Swift.h")
  4. Drag and drop the WeScan folder to add it to your project
  5. In your class, add @class ImageScannerController;

Example Implementation

ImageScannerController *scannerViewController = [[ImageScannerController alloc] init];
[self presentViewController:scannerViewController animated:YES completion:nil];

Contributing

As the creators and maintainers of this project, we're glad to invite contributors to help us stay up to date. Please take a moment to review the contributing document to make the contribution process easy and effective for everyone involved.

  • If you found a bug, open an issue.
  • If you have a feature request, open an issue.
  • If you want to contribute, submit a pull request.

License

WeScan is available under the MIT license. See the LICENSE file for more info.


WeScan's Issues

Old Screen Cleaning

Hello, I'm launching the scanner from a button. Taking the picture works fine. But when I finish and tap the button to launch it again, the previous screen's image is still showing. I want a fresh screen every time I press the button, with the old image cleared. How can I do that?
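A likely fix, assuming the cause is a single scanner controller instance being reused across presentations, is to create a fresh ImageScannerController each time the button is tapped. A sketch (the class and method names are illustrative):

```swift
import UIKit
import WeScan

// Sketch: present a new scanner instance per tap so no state from a
// previous scan survives. Assumes the host conforms to
// ImageScannerControllerDelegate.
final class HostViewController: UIViewController, ImageScannerControllerDelegate {
    @objc func scanButtonTapped() {
        let scanner = ImageScannerController()  // fresh instance each time
        scanner.imageScannerDelegate = self
        present(scanner, animated: true)
    }

    func imageScannerController(_ scanner: ImageScannerController, didFinishScanningWithResults results: ImageScannerResults) {
        scanner.dismiss(animated: true)
    }
    func imageScannerController(_ scanner: ImageScannerController, didFailWithError error: Error) {
        scanner.dismiss(animated: true)
    }
    func imageScannerControllerDidCancel(_ scanner: ImageScannerController) {
        scanner.dismiss(animated: true)
    }
}
```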

Save as PDF by Default

What is the problem

I think we should save the documents as PDF by default, because:

  • All other scan apps do this (Dropbox, Scanbot).
  • Hardware Scanners do it.
  • People are most likely to "scan" documents.
    • PDF is the standard for scanned documents - there's no clear data on this, but considering that all actual scanners use PDF as well, I think there's a good case to be made.
    • Most documents contain text, JPEG makes text unreadable.
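A sketch of what PDF-by-default could look like on the library side, assuming iOS 10+ and UIGraphicsPDFRenderer (the helper name `pdfData(from:)` is illustrative, not WeScan API):

```swift
import UIKit

// Sketch: flatten a scanned UIImage into a single-page PDF whose page
// size matches the image size. Illustrative helper, not part of WeScan.
func pdfData(from scannedImage: UIImage) -> Data {
    let bounds = CGRect(origin: .zero, size: scannedImage.size)
    let renderer = UIGraphicsPDFRenderer(bounds: bounds)
    return renderer.pdfData { context in
        context.beginPage()
        scannedImage.draw(in: bounds)  // draw the scan to fill the PDF page
    }
}
```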

Bug when trying to run on Xcode 10, Swift 4.2

When trying to run I get this error:

Showing All Messages
:-1: Multiple commands produce '/Users/oscargorog/Library/Developer/Xcode/DerivedData/Musiker-avnoagqbrsojuteinjszfpuhbrvr/Build/Products/Debug-iphonesimulator/WeScan/WeScan.framework/Info.plist':
1) Target 'WeScan' has copy command from '/Users/oscargorog/Desktop/Oscar's MacBook Pro/Coding/Musiker/Pods/WeScan/WeScan/Info.plist' to '/Users/oscargorog/Library/Developer/Xcode/DerivedData/Musiker-avnoagqbrsojuteinjszfpuhbrvr/Build/Products/Debug-iphonesimulator/WeScan/WeScan.framework/Info.plist'
2) Target 'WeScan' has process command with input '/Users/oscargorog/Desktop/Oscar's MacBook Pro/Coding/Musiker/Pods/Target Support Files/WeScan/Info.plist'

Localizations aren't working when WeScan is used as a pod

I'm trying to make a PR to add Portuguese and Italian localizations to WeScan, but I'm not able to get WeScan working with localizations when it's a pod in another project, as pods are compiled without bundles, so the localizations aren't being found by NSLocalizedString.

I think there are two options we could use to solve this issue:

  • Ask the host app to provide localizations (I don't think this is very elegant, but if we do this we have to remove the custom bundle from NSLocalizedString because it's making it impossible at the moment)
  • Find a way to deliver localizations together with WeScan

On the second option, I've investigated (in https://github.com/justJS/WeScan/tree/private) but haven't been able to figure it out. I'd appreciate it if anyone with experience in this area could help out.

Rethink the zooming corner

@nickseidel thanks for thinking with us here. I had the same discussion with @Boris-Em before, but he mentioned that we copied the exact same behaviour as the notes iOS app. Although we should handle Apple as a serious example, we might want to revisit this ourselves for better UX.

What can we improve?

We might want to show the zoomed circle in an offset of the touch point, so it's visible in all scenarios. This does mean we step away from the Apple way of doing this, but the question is either way whether users are used to this Apple behaviour and whether we should just do the best for our users instead.

Allow Capture when No Rectangle Detected

Currently, we don't let the user proceed to the Edit screen from the Scanning screen if no rectangle has been detected.

Instead, we should allow users to proceed to the Edit screen, and just pick an arbitrary rectangle that the user can adjust.
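A minimal sketch of the fallback, assuming the arbitrary rectangle is a centered quadrilateral inset from the image edges (the helper name and inset ratio are illustrative, not WeScan API):

```swift
import Foundation

// Sketch: when no rectangle is detected, fall back to a centered
// quadrilateral inset from the image edges so the user can adjust it.
// Corners are returned clockwise from the top-left.
func defaultQuad(for imageSize: CGSize, insetRatio: CGFloat = 0.1) -> [CGPoint] {
    let dx = imageSize.width * insetRatio
    let dy = imageSize.height * insetRatio
    return [
        CGPoint(x: dx, y: dy),                                      // top-left
        CGPoint(x: imageSize.width - dx, y: dy),                    // top-right
        CGPoint(x: imageSize.width - dx, y: imageSize.height - dy), // bottom-right
        CGPoint(x: dx, y: imageSize.height - dy)                    // bottom-left
    ]
}
```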

Auto-Capture

Detect the document & capture it without pressing camera shutter button.

Add a contribution doc

We should add a contribution doc, for which we can reuse the one from UINotifications.

Rotation Support on Edit Screen

Having the option to rotate the image on the Edit screen, would solve a few edge cases where the orientation of the image isn't correct.

Add Flash Support

Having the option to use the flash of the device would be a great improvement when in low light.

Just a comment for future enhancement!

First of all, great framework!
I'd just like to suggest some things that could make your nice framework even better than similar ones!
It would be great to add:

topCenter
bottomCenter
leftCenter
rightCenter
(like the image)
With these, the user could quickly adjust the manual crop by moving each edge up, down, left, or right!
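The suggested edge handles could be derived from the existing corners; a minimal sketch (the Quad type and property names are illustrative, not WeScan's actual types):

```swift
import Foundation

// Sketch: midpoints of a quadrilateral's edges, which could back the
// suggested topCenter/rightCenter/bottomCenter/leftCenter drag handles.
struct Quad {
    var topLeft, topRight, bottomRight, bottomLeft: CGPoint

    private func mid(_ a: CGPoint, _ b: CGPoint) -> CGPoint {
        CGPoint(x: (a.x + b.x) / 2, y: (a.y + b.y) / 2)
    }

    var topCenter: CGPoint { mid(topLeft, topRight) }
    var rightCenter: CGPoint { mid(topRight, bottomRight) }
    var bottomCenter: CGPoint { mid(bottomRight, bottomLeft) }
    var leftCenter: CGPoint { mid(bottomLeft, topLeft) }
}
```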

Another nice feature would be an option to filter the cropped image as black and white (using GPUImage to enhance scanned documents) or something similar!


`captureOutput(_:didOutput:from:)` is not called

When I run the demo project on an iPad mini 2 (iOS 11.4), the AVCaptureVideoDataOutputSampleBufferDelegate method is never called, so the screen is black and nothing is captured.

I wrote the same functionality myself and hit the same problem on the iPad mini 2 (11.4). Here is my capture-session setup code:

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (!device) return;
self.captureDevice = device;

_imageDetectionConfidence = 0.0;

AVCaptureSession *session = [AVCaptureSession new];
session.sessionPreset = AVCaptureSessionPresetPhoto;
self.captureSession = session;

[session beginConfiguration];

AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:NULL];
[session addInput:input];

AVCaptureVideoDataOutput *dataOutput = [AVCaptureVideoDataOutput new];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:dataOutput];

self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:self.stillImageOutput];

AVCaptureConnection *connection = [dataOutput.connections firstObject];
[connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];

if (device.isFlashAvailable) {
    [device lockForConfiguration:nil];
    [device setFlashMode:AVCaptureFlashModeOff];
    [device unlockForConfiguration];
}

if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
    [device lockForConfiguration:nil];
    [device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    [device unlockForConfiguration];
}

[session commitConfiguration];

I am quite confused by this problem and would appreciate any help. Thanks.

Pod doesn't have the updated code

The example project has a more up-to-date library, with features such as Auto Capture and the Magic Wand fixes. When I installed the library using CocoaPods, these features were missing. Could you please ensure the CocoaPods release is updated?

Thanks for creating this library, it works just great!

Use Github project boards

We should enable Projects on the repo so we can have a Kanban (Trello-style) board with to-do, in-progress, and done columns.

This way we can easily track duplicate/new issues and implementations, and note when we are working on an issue.

Make Scanning View Available

By making the scanning UIView public, we would give users the opportunity to implement their own flow if they don't want to use the one provided by WeScan.

iPhone X layout issues

Looks like the changes in #83 did not work fully as expected. On iPhone X the flash icon is vertically centered, while the Auto/Manual button is aligned to the bottom. Not sure which of them is correct, but... Added a screenshot:


Update Gif on README with New Design

The gif used in the README isn't up to date with how WeScan currently looks. We should create a new gif showing the same functionality, but with the new design.

Q: I am unable to refer WeScan Project

Hi,
I'm trying to download the project manually and make some changes.

The manual installation instructions say: just download the project, and drag and drop the "WeScan" folder into your project.

I downloaded it and added it to my project. Steps:

1. Add files to "MyProjectnamehere".
2. Selected the downloaded WeScan folder, with the options "Copy items if needed" unchecked and "Create groups" selected.
3. I would now like to add WeScan.framework, but could not find it in the downloaded folder.
4. I opened the complete sample project, saw the framework there, picked it up, and added it to my project.
5. When I use import WeScan, it says there is no such module.

Could anyone help? I am very new to Swift and Xcode.

Fix Travis CI

CI is currently failing with the following error:

[16:32:12]: --------------------
[16:32:12]: --- Step: danger ---
[16:32:12]: --------------------
[16:32:12]: $ bundle exec danger --dangerfile=/Users/travis/build/WeTransfer/WeScan/fastlane/../Submodules/WeTransfer-iOS-CI/Danger/Dangerfile
[16:32:12]: ▸ 
[16:32:12]: ▸ Could not set up API to Code Review site for Danger
[16:32:12]: ▸ 
[16:32:12]: ▸ For your GitHub repo, you need to expose: DANGER_GITHUB_API_TOKEN

Support passing in preferences when initializing the ImageScannerController

As discussed in #32 and #11, we should add support for passing in preferences to the ImageScannerController.

As per our previous discussions, I feel this should be implemented as a 'parameter object': an ImageScannerPreferences class with variables that have default values. The host app initializes the preferences object, optionally changes certain preferences, and passes the entire object to the ImageScannerController initializer.

This would allow us to add more preferences in the future without cluttering the initializer.

Example Implementation:

class ImageScannerPreferences {
  var shouldScanMultipleItems: Bool = true
}
class ImageScannerController {
  init(preferences: ImageScannerPreferences) { }
}

Then somewhere else (host app):

...
let prefs = ImageScannerPreferences()
prefs.shouldScanMultipleItems = true
let controller = ImageScannerController(preferences: prefs)

Image size reduction after scanning

When an image is captured and then flattened, the output is almost half the size of the original, i.e. the scanned image is of noticeably lower quality.

Can you please fix this or let us know a solution? We need higher-quality images with less loss.

Thanks,
Aditya

Improve Editable Rectangle

Right now, dragging corners is allowed anywhere within the image bounds.
This should be improved by not allowing corners to be dragged on top of each other, and by not allowing shapes that are not valid quadrilaterals.
This should be implemented in validPoint() of QuadrilateralView.
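One way to implement such a check (a sketch, not WeScan's actual validPoint() logic) is to reject any drag that would make the four corners non-convex, using the sign of consecutive edge cross products:

```swift
import Foundation

// Sketch: a quadrilateral given by four corners in order is convex
// (and therefore not self-intersecting or degenerate) exactly when the
// cross products of consecutive edges all share the same sign.
func isConvexQuad(_ points: [CGPoint]) -> Bool {
    guard points.count == 4 else { return false }
    var crosses = [CGFloat]()
    for i in 0..<4 {
        let a = points[i]
        let b = points[(i + 1) % 4]
        let c = points[(i + 2) % 4]
        // z-component of (b - a) x (c - b)
        let cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x)
        if cross == 0 { return false }  // collinear corners: degenerate shape
        crosses.append(cross)
    }
    return crosses.allSatisfy { $0 > 0 } || crosses.allSatisfy { $0 < 0 }
}
```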

OpenCV for document edge detection

Thank you for starting this, we were going to open source our own thing at printtapp.com but now we saw you are doing the same we would love to team up with you and build a comprehensive scanning solution. Let me know if you are free for a call or chat.

The current implementation of the rectangle detection is fooled by folded pieces of paper, I would recommend moving to OpenCV edge detection which will do a better job with this

Here is a good article on how Dropbox did it:
https://blogs.dropbox.com/tech/2016/08/fast-and-accurate-document-detection-for-scanning/

Swift 4.2 support for iOS 12 SDK

Some of the methods in UIKit have been renamed again (please, Apple, can this be the last time!).

We just need to refactor some of the methods to use the correct form of methods such as bringSubviewToFront.

Auto Capture & Enhanced Scanned Image & Batch Scanning

- Auto Capture: detect the document and capture it without pressing the camera shutter button.

- Batch Scanning: scan multiple documents in one session.

- Enhanced Scanned Image: apply image processing to get a clear scanned image, like the Scannable or CamScanner apps.

- Camera Roll: pick an image from the camera roll and detect the document.

Please add these features to your library ASAP.

Awesome document scanning library.

Edit screen to be optional.

Is it possible to add a mode where you can go from the scan screen directly to the preview screen, or perhaps get the full result on the scan screen and pass it to the delegates?

This way the SDK could fit everyone's needs.

Android Support

Whilst this is originally an iOS project, we would also like to help port it to Android. Let us know if the best way forward would be for WeTransfer to clone it to Android or perhaps add an android folder.

Cocoa Pod Issue

When installing WeScan through CocoaPods, the following error appears:
[!] CocoaPods could not find compatible versions for pod "WeScan":
In Podfile:
WeScan (~> 1.0.0)

None of your spec sources contain a spec satisfying the dependency: WeScan (~> 1.0.0).

You have either:

  • out-of-date source repos which you can update with pod repo update or with pod install --repo-update.
  • mistyped the name or version.
  • not added the source repo that hosts the Podspec to your Podfile.

Note: as of CocoaPods 1.0, pod repo update does not happen on pod install by default.

Feature: Collection view(preview) holding scanned image, camera settings

This is a really nice plugin.

I would like some more features, which I think are already mentioned in the GitHub issues section.

The few I am currently missing are:
1. Holding scanned images in a collection view, with a way to edit (retake) a particular one by selecting it from the collection view.
2. An option for settings such as flash, front/rear camera, etc.
3. Rotating (90/180/270/360 degrees) the captured image before adding it to ImageScannerResults (while editing the scan).

The component would be complete with these features added.

Localization / internationalization

Hi,

Is there a way to alter or override the texts displayed by the ImageScannerController, according to the device language?

Regards
Tobias

Filter the cropped image as black and white or even better based on user selection!

As per a request based on this topic:

It would be a nice feature to add an option to filter the cropped image as black and white (using GPUImage to enhance scanned documents) or something similar!
What I really mean is that it would be nice to give the user different options to adjust images manually, such as brightness and contrast, say under a "Details" panel (like a fine adjustment).
Note: based on user selection, say the user wants to scan a receipt, we could have a preset to process it and give the user a better experience!

