
smutdetect4autopsy's People

Contributors

rajwitt


smutdetect4autopsy's Issues

Rigan Ap-Apid's "An Algorithm for Nudity Detection"

I remember looking at different classification approaches and feel this one could work with SmutDetect for Autopsy, or standalone.

Others have implemented the algorithm in JavaScript and PHP:
https://github.com/pa7/nude.js
https://github.com/bediger4000/NudeDetectorPHP

I've tried the algorithm in Python, but it needs tweaking, and I've not yet ported or refactored it to Java!

import cv2
import numpy as np

def skin_detection(image):
    # Convert the whole image to HSV once (OpenCV uint8 ranges: H 0-179, S/V 0-255);
    # converting pixel-by-pixel through cv2.cvtColor is extremely slow
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    skin_mask = is_skin_pixel(hsv)

    # Mark skin pixels red (BGR 0, 0, 255) and everything else black
    output = np.zeros_like(image)
    output[skin_mask] = (0, 0, 255)
    return output

def is_skin_pixel(hsv):
    # Loose skin-tone bounds in OpenCV's uint8 HSV space; tune for your data set
    lower_hsv_bound = np.array([0, 15, 30], dtype=np.uint8)
    upper_hsv_bound = np.array([179, 255, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower_hsv_bound, upper_hsv_bound) > 0

def count_skin_pixels(image):
    return np.count_nonzero(np.all(image == (0, 0, 255), axis=-1))

def identify_connected_skin_pixels(image):
    # Build the binary mask directly from the red skin marking; a grayscale
    # threshold at 127 would discard pure red, which converts to gray ~76
    binary_image = np.all(image == (0, 0, 255), axis=-1).astype(np.uint8) * 255

    # cv2.connectedComponents returns (number of labels, label image) in that order
    nlabels, labeled_image = cv2.connectedComponents(binary_image)

    # Collect the pixel coordinates of each labelled region (label 0 is background)
    skin_regions = []
    for label in range(1, nlabels):
        skin_regions.append(np.where(labeled_image == label))

    return skin_regions

def identify_three_largest_skin_regions(skin_regions):
    # Each region is a (rows, cols) index pair from np.where, so its size is
    # the number of pixels, not len() of the pair (which is always 2)
    sorted_regions = sorted(skin_regions, key=lambda region: region[0].size, reverse=True)
    return sorted_regions[:3]

def calculate_bounding_polygon(skin_region):
    leftmost_pixel_x = np.min(skin_region[1])
    uppermost_pixel_y = np.min(skin_region[0])
    rightmost_pixel_x = np.max(skin_region[1])
    lowermost_pixel_y = np.max(skin_region[0])

    # cv2.contourArea needs int32/float32 points; NumPy's default int64 raises an error
    bounding_polygon = np.array([
        [leftmost_pixel_x, uppermost_pixel_y],
        [rightmost_pixel_x, uppermost_pixel_y],
        [rightmost_pixel_x, lowermost_pixel_y],
        [leftmost_pixel_x, lowermost_pixel_y]
    ], dtype=np.int32)

    return bounding_polygon

def calculate_bounding_polygon_area(bounding_polygon):
    return cv2.contourArea(bounding_polygon)

def count_skin_pixels_inside_bounding_polygon(bounding_polygon, image):
    # The polygon is an axis-aligned rectangle, so a slice of the image is
    # enough; cv2.pointPolygonTest only tests a single point per call
    x_min, y_min = bounding_polygon[0]
    x_max, y_max = bounding_polygon[2]
    region = image[y_min:y_max + 1, x_min:x_max + 1]
    return np.count_nonzero(np.all(region == (0, 0, 255), axis=-1))

def calculate_percentage_skin_pixels_inside_bounding_polygon(skin_pixel_count_inside_polygon, bounding_polygon_area):
    return (skin_pixel_count_inside_polygon / bounding_polygon_area) * 100

def calculate_average_intensity_of_pixels_inside_bounding_polygon(bounding_polygon, image):
    # Average over every pixel inside the rectangle, not just its four corners
    x_min, y_min = bounding_polygon[0]
    x_max, y_max = bounding_polygon[2]
    intensity_array = image[y_min:y_max + 1, x_min:x_max + 1, 2]
    return float(np.mean(intensity_array))

# Example usage:

image = cv2.imread('mod.jpg')
if image is None:
    raise FileNotFoundError("Could not read 'mod.jpg'")

# Step 1: Perform skin detection.
skin_detected_image = skin_detection(image)

# Step 2: Identify connected skin pixels.
skin_regions = identify_connected_skin_pixels(skin_detected_image)

# Step 3: Identify the three largest skin regions.
largest_skin_regions = identify_three_largest_skin_regions(skin_regions)

# Initialize variables to store information about the three largest skin regions
bounding_polygons = []
bounding_polygon_areas = []
skin_pixel_counts_inside_polygons = []
percentage_skin_pixels_inside_polygons = []
average_intensity_inside_polygons = []

# Steps 4-15: Calculate bounding polygons, areas, skin pixel counts, percentages, and average intensities
for idx, skin_region in enumerate(largest_skin_regions):
    bounding_polygon = calculate_bounding_polygon(skin_region)
    bounding_polygon_area = calculate_bounding_polygon_area(bounding_polygon)
    skin_pixel_count_inside_polygon = count_skin_pixels_inside_bounding_polygon(bounding_polygon, skin_detected_image)
    percentage_skin_pixels_inside_polygon = calculate_percentage_skin_pixels_inside_bounding_polygon(
        skin_pixel_count_inside_polygon, bounding_polygon_area
    )
    average_intensity = calculate_average_intensity_of_pixels_inside_bounding_polygon(bounding_polygon, skin_detected_image)

    # Store information about each skin region
    bounding_polygons.append(bounding_polygon)
    bounding_polygon_areas.append(bounding_polygon_area)
    skin_pixel_counts_inside_polygons.append(skin_pixel_count_inside_polygon)
    percentage_skin_pixels_inside_polygons.append(percentage_skin_pixels_inside_polygon)
    average_intensity_inside_polygons.append(average_intensity)

    print(f"Region {idx + 1}:")
    print(f"Bounding Polygon Area: {bounding_polygon_area}")
    print(f"Percentage of skin pixels inside the polygon: {percentage_skin_pixels_inside_polygon:.2f}%")
    print(f"Average Intensity inside the polygon: {average_intensity:.2f}")
    print("-" * 30)


# Step 16: Classify the image based on the provided criteria
skin_percentage = (count_skin_pixels(skin_detected_image) / (image.shape[0] * image.shape[1])) * 100

if skin_percentage < 15:
    classification = "Not Nude"
else:
    total_skin_count = count_skin_pixels(skin_detected_image)
    # Pad with zeros so images with fewer than three skin regions don't raise IndexError
    region_sizes = [len(region[0]) for region in largest_skin_regions] + [0, 0, 0]
    largest_region_skin_pixels = region_sizes[0]
    second_largest_region_skin_pixels = region_sizes[1]
    third_largest_region_skin_pixels = region_sizes[2]

    if (
        largest_region_skin_pixels < 0.35 * total_skin_count and
        second_largest_region_skin_pixels < 0.30 * total_skin_count and
        third_largest_region_skin_pixels < 0.30 * total_skin_count
    ):
        classification = "Not Nude"
    elif largest_region_skin_pixels < 0.45 * total_skin_count:
        classification = "Not Nude"
    else:
        classification = "Nude"

print(f"Image Classification: {classification}")
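The decision thresholds in the snippet above (15% overall skin, then the 0.35/0.30/0.30 and 0.45 region ratios) can be pulled out into a small, testable function. This is a sketch of the same rules, separated from the image pipeline, not a change to them:

```python
def classify_by_skin_regions(total_skin, region_sizes, total_pixels):
    """Apply the decision rules from the snippet above.

    total_skin   -- number of skin pixels in the whole image
    region_sizes -- sizes of the three largest skin regions, largest first
    total_pixels -- image height * width
    """
    skin_percentage = total_skin / total_pixels * 100
    if skin_percentage < 15:
        return "Not Nude"
    r1, r2, r3 = region_sizes
    if r1 < 0.35 * total_skin and r2 < 0.30 * total_skin and r3 < 0.30 * total_skin:
        return "Not Nude"
    if r1 < 0.45 * total_skin:
        return "Not Nude"
    return "Nude"
```

Keeping the thresholds in one pure function makes them easy to unit-test and tune without re-running skin detection on images.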

Fails to install in Autopsy 4.21.0 Java 18.0.1.1 unpack200

I can download and add the plugin, but on install I get this message:
Cannot complete the validation of downloaded plugins

The validation of downloaded plugins cannot be completed, cause: NBM C:\Users\cam30\AppData\Roaming\autopsy\update\download\uk-co-smutdetect.nbm needs unpack200 to process following entries:
netbeans/modules/uk-co-smutdetect.jar.pack.gz

My Java Version is:

java -version
java version "18.0.1.1" 2022-04-22
Java(TM) SE Runtime Environment (build 18.0.1.1+2-6)
Java HotSpot(TM) 64-Bit Server VM (build 18.0.1.1+2-6, mixed mode, sharing)
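The likely root cause: `pack200`/`unpack200` were removed from the JDK in Java 14 (JEP 367), so Java 18 cannot process the `.jar.pack.gz` entry inside the NBM. The plugin would need to be repackaged with a plain jar, or installed under a JDK that still ships `unpack200` (13 or earlier). Since an `.nbm` is just a zip archive, you can check whether a plugin will hit this before installing; a minimal sketch:

```python
import zipfile

def packed_entries(nbm_path):
    """List pack200-compressed entries in an NBM (NBMs are zip archives).

    Any '.jar.pack.gz' entry requires unpack200, which JDK 14+ no longer ships.
    """
    with zipfile.ZipFile(nbm_path) as nbm:
        return [name for name in nbm.namelist() if name.endswith(".jar.pack.gz")]
```

For example, running `packed_entries` on the downloaded `uk-co-smutdetect.nbm` should list `netbeans/modules/uk-co-smutdetect.jar.pack.gz`, confirming the error message above.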
