
machine-learning-engineering-for-production-public's People

Contributors

andres-zartab, cfav-dev, tugceoz


machine-learning-engineering-for-production-public's Issues

C4_W3_Lab_2_TFX_Custom_Components: AttributeError: module 'tfx.v1' has no attribute 'orchestration'

I get the error AttributeError: module 'tfx.v1' has no attribute 'orchestration' when I run the Ungraded Lab: Developing TFX Custom Components in Colab, at the cell where a LocalDagRunner is created:

tfx.orchestration.LocalDagRunner().run(
  _create_pipeline(
      pipeline_name=PIPELINE_NAME,
      pipeline_root=PIPELINE_ROOT,
      data_root=DATA_ROOT,
      module_file=_trainer_module_file,
      serving_model_dir=SERVING_MODEL_DIR,
      metadata_path=METADATA_PATH))

It seems the error also happens in the TFX tutorial notebooks, as reported in tensorflow/tfx#5610.
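
A workaround that has reportedly helped with similar symptoms (an assumption on my part, not confirmed for this lab) is to restart the Colab runtime after installing TFX so that the tfx.v1 namespace is fully re-exported, or to import the runner from its concrete module instead of the v1 alias:

from tfx.orchestration.local.local_dag_runner import LocalDagRunner

LocalDagRunner().run(
    _create_pipeline(
        pipeline_name=PIPELINE_NAME,
        pipeline_root=PIPELINE_ROOT,
        data_root=DATA_ROOT,
        module_file=_trainer_module_file,
        serving_model_dir=SERVING_MODEL_DIR,
        metadata_path=METADATA_PATH))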

Filepaths are 72% of the max Windows filepath length

I ran into an issue cloning the repository on my Windows machine due to the very lengthy filepaths it contains. The longest filepath in the repository is:
machine-learning-engineering-for-production-public/course4/week2-ungraded-labs/C4_W2_Lab_2_Intro_to_Kubernetes/saved_model_half_plus_two_cpu/00000123/variables/variables.data-00000-of-00001
which is a whopping 189 characters. Sadly, the Windows MAX_PATH limit is still only 260 characters. This leaves users only 71 characters of filepath remaining, which in my case was not enough to store the repository where I wanted to on my system. I can work around the issue by cloning the repo near my root C:/ and then renaming the repository directory to something shorter, but shorter paths within the repository would be preferred.
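
A workaround not mentioned above that may also help on Windows (core.longpaths is a standard Git-for-Windows setting, so this is a hedged suggestion rather than something from the lab instructions): enable long-path support in Git before cloning:

git config --global core.longpaths true
git clone https://github.com/https-deeplearning-ai/machine-learning-engineering-for-production-public.git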

Course 4: error collecting test_clf.py

There seems to be an error caused by the interaction between the installed scikit-learn version and pickle loading.

This was encountered when running the unit test 'test_clf.py' on commit. scikit-learn is installed from requirements.txt, which may be the problem; pinning a specific version would likely fix this issue.
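
A minimal sketch of the kind of pin suggested above; the version number here is a placeholder and should match the scikit-learn version that produced the pickled classifier, not a value taken from the lab:

# in requirements.txt, replace the unpinned entry with an explicit version, e.g.:
scikit-learn==1.0.2  # placeholder; use the version the model was pickled with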

Small typo in client.ipynb

Hey there :)

There's a small typo in the second-last cell:

However, the code you just experience with is close to what you see in real production environments.

This should probably be: "...the code you just had experience with"

Thanks

Running c1w1 lab through Docker on Windows

I had to change the way I ran the Docker image since I'm on Windows:

docker run -it --rm -p 8888:8888 -p 8000:8000 --mount type=bind,source="%cd%",target=/home/jovyan/work deeplearningai/mlepc1w1-ugl:jupyternb

error in local setup for deploying deep model C1W1

The requirements.txt file seems to be empty.
I am in C:\Users\XYZ\machine-learning-engineering-for-production-public\course1\week1-ungraded-lab>

and when I run ls it throws an error saying the command is not recognized.

mlep-w1-lab tensorflow model out of date?

I am running the notebook and everything was working fine until I got to the following.
It seems like the external model, or something it depends on, has changed such that this is no longer working.

import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox


def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
    """Detects common objects on an image and creates a new image with bounding boxes.
​
    Args:
        filename (str): Filename of the image.
        model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
        confidence (float, optional): Desired confidence level. Defaults to 0.5.
    """
    
    # Images are stored under the images/ directory
    img_filepath = f'images/{filename}'
    
    # Read the image into a numpy array
    img = cv2.imread(img_filepath)
    
    # Perform the object detection
    bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
    
    # Print current image's filename
    print(f"========================\nImage processed: {filename}\n")
    
    # Print detected objects with confidence level
    for l, c in zip(label, conf):
        print(f"Detected object: {l} with confidence level of {c}\n")
    
    # Create a new image that includes the bounding boxes
    output_image = draw_bbox(img, bbox, label, conf)
    
    # Save the image in the directory images_with_boxes
    cv2.imwrite(f'images_with_boxes/{filename}', output_image)
    
    # Display the image with bounding boxes
    display(Image(f'images_with_boxes/{filename}'))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[4], line 2
      1 import cv2
----> 2 import cvlib as cv
      3 from cvlib.object_detection import draw_bbox
      6 def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\cvlib\__init__.py:8
      6 from .face_detection import detect_face
      7 from .object_detection import detect_common_objects
----> 8 from .gender_detection import detect_gender
      9 from .utils import get_frames, animate

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\cvlib\gender_detection.py:3
      1 import os
      2 import cv2
----> 3 from tensorflow.keras.utils import get_file
      5 initialize = True
      6 gd = None

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\tensorflow\__init__.py:41
     38 import six as _six
     39 import sys as _sys
---> 41 from tensorflow.python.tools import module_util as _module_util
     42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
     44 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import.

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\tensorflow\python\__init__.py:41
     33 # We aim to keep this file minimal and ideally remove completely.
     34 # If you are adding a new file with @tf_export decorators,
     35 # import it in modules_with_exports.py instead.
     36 
     37 # go/tf-wildcard-import
     38 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top
     40 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
---> 41 from tensorflow.python.eager import context
     43 # pylint: enable=wildcard-import
     44 
     45 # Bring in subpackages.
     46 from tensorflow.python import data

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\tensorflow\python\eager\context.py:33
     30 import numpy as np
     31 import six
---> 33 from tensorflow.core.framework import function_pb2
     34 from tensorflow.core.protobuf import config_pb2
     35 from tensorflow.core.protobuf import rewriter_config_pb2

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\tensorflow\core\framework\function_pb2.py:16
     11 # @@protoc_insertion_point(imports)
     13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
     17 from tensorflow.core.framework import node_def_pb2 as tensorflow_dot_core_dot_framework_dot_node__def__pb2
     18 from tensorflow.core.framework import op_def_pb2 as tensorflow_dot_core_dot_framework_dot_op__def__pb2

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py:16
     11 # @@protoc_insertion_point(imports)
     13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
     17 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
     18 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\tensorflow\core\framework\tensor_pb2.py:16
     11 # @@protoc_insertion_point(imports)
     13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
     17 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
     18 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py:16
     11 # @@protoc_insertion_point(imports)
     13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
     17 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
     20 DESCRIPTOR = _descriptor.FileDescriptor(
     21   name='tensorflow/core/framework/resource_handle.proto',
     22   package='tensorflow',
   (...)
     26   ,
     27   dependencies=[tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2.DESCRIPTOR,tensorflow_dot_core_dot_framework_dot_types__pb2.DESCRIPTOR,])

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py:36
     13 _sym_db = _symbol_database.Default()
     18 DESCRIPTOR = _descriptor.FileDescriptor(
     19   name='tensorflow/core/framework/tensor_shape.proto',
     20   package='tensorflow',
   (...)
     23   serialized_pb=_b('\n,tensorflow/core/framework/tensor_shape.proto\x12\ntensorflow\"z\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x02 \x03(\x0b\x32 .tensorflow.TensorShapeProto.Dim\x12\x14\n\x0cunknown_rank\x18\x03 \x01(\x08\x1a!\n\x03\x44im\x12\x0c\n\x04size\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\tB\x87\x01\n\x18org.tensorflow.frameworkB\x11TensorShapeProtosP\x01ZSgithub.com/tensorflow/tensorflow/tensorflow/go/core/framework/tensor_shape_go_proto\xf8\x01\x01\x62\x06proto3')
     24 )
     29 _TENSORSHAPEPROTO_DIM = _descriptor.Descriptor(
     30   name='Dim',
     31   full_name='tensorflow.TensorShapeProto.Dim',
     32   filename=None,
     33   file=DESCRIPTOR,
     34   containing_type=None,
     35   fields=[
---> 36     _descriptor.FieldDescriptor(
     37       name='size', full_name='tensorflow.TensorShapeProto.Dim.size', index=0,
     38       number=1, type=3, cpp_type=2, label=1,
     39       has_default_value=False, default_value=0,
     40       message_type=None, enum_type=None, containing_type=None,
     41       is_extension=False, extension_scope=None,
     42       serialized_options=None, file=DESCRIPTOR),
     43     _descriptor.FieldDescriptor(
     44       name='name', full_name='tensorflow.TensorShapeProto.Dim.name', index=1,
     45       number=2, type=9, cpp_type=9, label=1,
     46       has_default_value=False, default_value=_b("").decode('utf-8'),
     47       message_type=None, enum_type=None, containing_type=None,
     48       is_extension=False, extension_scope=None,
     49       serialized_options=None, file=DESCRIPTOR),
     50   ],
     51   extensions=[
     52   ],
     53   nested_types=[],
     54   enum_types=[
     55   ],
     56   serialized_options=None,
     57   is_extendable=False,
     58   syntax='proto3',
     59   extension_ranges=[],
     60   oneofs=[
     61   ],
     62   serialized_start=149,
     63   serialized_end=182,
     64 )
     66 _TENSORSHAPEPROTO = _descriptor.Descriptor(
     67   name='TensorShapeProto',
     68   full_name='tensorflow.TensorShapeProto',
   (...)
    100   serialized_end=182,
    101 )
    103 _TENSORSHAPEPROTO_DIM.containing_type = _TENSORSHAPEPROTO

File ~\miniconda3\envs\mlep-w1-lab\lib\site-packages\google\protobuf\descriptor.py:561, in FieldDescriptor.__new__(cls, name, full_name, index, number, type, cpp_type, label, default_value, message_type, enum_type, containing_type, is_extension, extension_scope, options, serialized_options, has_default_value, containing_oneof, json_name, file, create_key)
    555 def __new__(cls, name, full_name, index, number, type, cpp_type, label,
    556             default_value, message_type, enum_type, containing_type,
    557             is_extension, extension_scope, options=None,
    558             serialized_options=None,
    559             has_default_value=True, containing_oneof=None, json_name=None,
    560             file=None, create_key=None):  # pylint: disable=redefined-builtin
--> 561   _message.Message._CheckCalledFromGeneratedFile()
    562   if is_extension:
    563     return _message.default_pool.FindExtensionByName(full_name)

TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

cv2 cannot be installed successfully on a Mac with M1 chip. The Jupyter lab server.ipynb stops running after the cell importing cv2.

The pip install of cvlib succeeds with both the conda environment method and the Docker method, but both methods fail when running import cvlib as cv.

Running pip install tensorflow and then pip install cvlib in base Python succeeds; however, when importing cvlib I get the error "illegal hardware instruction python".

Is this because I am using Mac with an M1 chip? Any help would be highly appreciated.

Thanks a lot

Jie

Cannot run the 'docker run' command in the course1 week1 lab when the path contains a space (Windows users)

I'm on Windows 10

I am trying to use Method 2 (Docker) to do the lab.
After successfully pulling the image with docker pull deeplearningai/mlepc1w1-ugl:jupyternb, I proceed to run the second command docker run -it --rm -p 8888:8888 -p 8000:8000 --mount type=bind,source=$(pwd),target=/home/jovyan/work deeplearningai/mlepc1w1-ugl:jupyternb

I then get the error

invalid argument "type=bind,source=C:/Users/msi/My" for "--mount" flag: target is required
See 'docker run --help'.
(mlep-w1-lab)

I suspect that my path is being truncated at the space (C:/Users/msi/My). What is the workaround here? How can I escape the space?
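
A workaround that usually helps here (standard shell/Docker quoting, not taken from the lab instructions): wrap the whole --mount value in quotes so the space in the path is not treated as an argument separator:

docker run -it --rm -p 8888:8888 -p 8000:8000 --mount "type=bind,source=$(pwd),target=/home/jovyan/work" deeplearningai/mlepc1w1-ugl:jupyternb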


Conda on mac M2 lab1 week1

I'm using a MacBook Air with an M2 processor and I'm facing a problem doing the lab work. Running the lab through Docker or in the browser did not work for me, so I decided to run it by installing all dependencies locally. From the beginning I found that the conda setup described only covers the M1 processor. If you have the same problem, here is a link to a video on installing conda on an M2: https://www.youtube.com/watch?v=O5Dv0gyDA8I

Can't make it work on Docker Desktop K8s (Mac)

Hi

In order to make the deployment work on K8s by Docker Desktop (Mac) I had to modify the deployment manifest:

  volumes:
      - name: tf-serving-volume
        hostPath:
          #path: /var/tmp/saved_model_half_plus_two_cpu
          path: /tmp/saved_model_half_plus_two_cpu
          #type: Directory

Then I verified the deployment was up and running:

kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
tf-serving-deployment-6c4f4dc9d6-nbhq8   1/1     Running   0          53m

Then I forwarded the pod port:

kubectl port-forward tf-serving-deployment-6c4f4dc9d6-nbhq8 8501:8501
Forwarding from 127.0.0.1:8501 -> 8501
Forwarding from [::1]:8501 -> 8501

And when trying to run a POST command to get a prediction, the pod refuses the connection request:

curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
curl: (52) Empty reply from server

E0225 10:46:00.303384 62138 portforward.go:400] an error occurred forwarding 8501 -> 8501: error forwarding port 8501 to pod aaecd472db77fbd0d7684f213bf653f70f4fdede346e05d5c2b255d3b495fed4, uid : exit status 1: 2022/02/25 16:46:00 socat[36952] E connect(16, AF=2 127.0.0.1:8501, 16): Connection refused

This is unusual to me, as K8s from Docker Desktop normally behaves like a regular K8s deployment. I'd appreciate some guidance, as I think it'd be good to understand how TF Serving works in a real K8s cluster and not only on Minikube/VirtualBox.

thanks

Mac M1 Guide does not work: conda PackagesNotFoundError

I tried to run conda install -c apple tensorflow-deps, but got the following error:
PackagesNotFoundError: The following packages are not available from current channels:

  • tensorflow-deps

Current channels:

Similarly, running python -m pip install tensorflow-macos I get the error: ERROR: Could not find a version that satisfies the requirement tensorflow-macos (from versions: none) ERROR: No matching distribution found for tensorflow-macos.

How could I resolve this issue?

conlist() Error

When running week2-ungraded-labs/C4_W2_Lab_1_FastAPI_Docker/with-batch,

the call to conlist() causes an error.
This error is due to the code using the syntax for Pydantic 1.10, while following the README instructions installs the latest version of Pydantic, 2.3.

This breaks the code, as conlist now expects min_length and max_length instead of min_items and max_items as keyword arguments (see the sketch below).
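
A minimal sketch of the keyword change, assuming a batch-of-floats field similar to the one in the lab (the field name and list length here are placeholders, not the lab's actual values):

from typing import List
from pydantic import BaseModel, conlist

class Batch(BaseModel):
    # Pydantic 1.x syntax: conlist(float, min_items=13, max_items=13)
    # Pydantic 2.x syntax uses min_length / max_length instead:
    batches: List[conlist(float, min_length=13, max_length=13)]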

mlepc4w2-ugl Docker repo does not exist

This is used by the C4_W2_Lab_3_Latency_Test_Compose lab. When running the "docker-compose up" command, I get the error

"Error response from daemon: pull access denied for mlepc4w2-ugl, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"

and if I go to Docker Hub, the repo actually is not there.

Can you please advise?

Unable to install tensorflow-deps on Mac M1

When I run the code conda install -c apple tensorflow-deps, I received the follow error:

PackagesNotFoundError: The following packages are not available from current channels:

  - tensorflow-deps

Current channels:

  - https://conda.anaconda.org/apple/osx-64
  - https://conda.anaconda.org/apple/noarch
  - https://repo.anaconda.com/pkgs/main/osx-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/osx-64
  - https://repo.anaconda.com/pkgs/r/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.
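
One thing that may be worth checking (an observation based on the channel URLs above, not a confirmed diagnosis): the channels end in osx-64, which suggests an x86_64 (Intel or Rosetta) conda install, while the apple tensorflow-deps packages are, as far as I know, only published for Apple silicon (osx-arm64). The active platform can be checked with:

conda info | grep platform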

Missing correct ipython kernel in c1w1, conda method

While following the instructions, after launching JupyterLab I noticed that the expected kernel was missing.
The default kernel named "Python" could not be launched due to an exception regarding distutils.

Traceback from Jupyter lab server log:

[I 2022-06-12 11:22:32.002 ServerApp] AsyncIOLoopKernelRestarter: restarting kernel (5/5), new random ports
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/bob/.local/lib/python3.6/site-packages/ipykernel_launcher.py", line 15, in <module>
    from ipykernel import kernelapp as app
  File "/home/bob/.local/lib/python3.6/site-packages/ipykernel/__init__.py", line 2, in <module>
    from .connect import *
  File "/home/bob/.local/lib/python3.6/site-packages/ipykernel/connect.py", line 16, in <module>
    import jupyter_client
  File "/home/bob/.local/lib/python3.6/site-packages/jupyter_client/__init__.py", line 6, in <module>
    from .asynchronous import AsyncKernelClient  # noqa
  File "/home/bob/.local/lib/python3.6/site-packages/jupyter_client/asynchronous/__init__.py", line 1, in <module>
    from .client import AsyncKernelClient  # noqa
  File "/home/bob/.local/lib/python3.6/site-packages/jupyter_client/asynchronous/client.py", line 8, in <module>
    from jupyter_client.client import KernelClient
  File "/home/bob/.local/lib/python3.6/site-packages/jupyter_client/client.py", line 20, in <module>
    from .connect import ConnectionFileMixin
  File "/home/bob/.local/lib/python3.6/site-packages/jupyter_client/connect.py", line 27, in <module>
    from jupyter_core.paths import jupyter_data_dir  # type: ignore
  File "/usr/lib/python3/dist-packages/jupyter_core/paths.py", line 21, in <module>
    from distutils.util import strtobool
ModuleNotFoundError: No module named 'distutils.util'

This issue can be avoided by registering the conda environment's Python runtime as a Jupyter kernel, using the following command:

python -m ipykernel install --user --name mlep-w1-lab

I'll create a pull request with an updated readme file if given the green light.

wrong path in README.md

In README.md there is
cd MLEP-public/week1-ungraded-lab
but it should be
cd MLEP-public/course1/week1-ungraded-lab

tensorflow packages in m1 mac

ERROR: Could not find a version that satisfies the requirement tensorflow-macos (from versions: none)
ERROR: No matching distribution found for tensorflow-macos

Wrong path in week 1 README.md

The instruction for changing directories is currently

cd MLEP-public/course1/week1-ungraded-lab

But it should be using the new repo name:

cd machine-learning-engineering-for-production-public/course1/week1-ungraded-lab

Unable to clone repo

Hello,

I cannot clone the repo; see the attachment for a screenshot of the error message.

Regards.


C3_W5_Lab_1_Shap_Values: ERROR: No matching distribution found for tensorflow==2.4.3

In the notebook C3_W5_Lab_1_Shap_Values, in the DeepExplainer section, the following code cell

# Take a random sample of 5000 training images
background = x_train[np.random.choice(x_train.shape[0], 5000, replace=False)]

# Use DeepExplainer to explain predictions of the model
e = shap.DeepExplainer(model, background)

# Compute shap values
# shap_values = e.shap_values(x_test[1:5])

yields the following output:

keras is no longer supported, please use tf.keras instead.
Your TensorFlow version is newer than 2.4.0 and so graph support has been removed in eager mode and some static graphs may not be supported. See PR #1483 for discussion.

An earlier code cell in the Imports section,

!pip install tensorflow==2.4.3

yields

ERROR: Could not find a version that satisfies the requirement tensorflow==2.4.3 (from versions: 2.8.0rc0, 2.8.0rc1, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.9.0rc0, 2.9.0rc1, 2.9.0rc2, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.10.0rc0, 2.10.0rc1, 2.10.0rc2, 2.10.0rc3, 2.10.0, 2.10.1, 2.11.0rc0, 2.11.0rc1, 2.11.0rc2, 2.11.0, 2.11.1, 2.12.0rc0, 2.12.0rc1, 2.12.0)
ERROR: No matching distribution found for tensorflow==2.4.3

The issue affects commit 34af5af3 in branch main as of 13.04.2023

Windows 10 (conda): no module named cv2

When I activated the conda environment and ran jupyter lab, I got the following error while importing cv2

no module named cv2

Fix: Run python -m jupyterlab instead of jupyter lab in cmd.

Reason: When you run the jupyter lab command, JupyterLab is launched with the default system path variables, which differ from the environment's path variables. You can see this by importing sys and checking sys.path (see the sketch below).

So python -m jupyterlab opens JupyterLab with the same path variables as the environment.
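
A quick check along those lines (a hedged sketch, not part of the original report), run inside a notebook cell:

import sys
print(sys.executable)  # should point into the activated conda environment
print(sys.path[:5])    # first few entries of the module search path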

I would like you to update the README.md accordingly.

c4 w1 tensorflow serving

Item link
Post :
https://community.deeplearning.ai/t/c4-w1-lab-2-unable-to-run-tensorflow-model-server-in-colab-notebook-c4-w1-lab-2-tf-serving-ipynb/136391

Lab link :
https://colab.research.google.com/github/https-deeplearning-ai/machine-learning-engineering-for-production-public/blob/main/course4/week1-ungraded-labs/C4_W1_Lab_3_TFS.ipynb

Description
The base Ubuntu distro (Bionic) doesn't contain glibc version 2.29 (here's the link). As a result, tensorflow serving fails to start the server.

Proposed solution
Recommend updating to Jammy LTS so the dependencies for tensorflow-serving can be installed safely.

ERROR: Could not find a version that satisfies the requirement tensorflow==2.3.1

Issue with the requirements.txt file: pip throws an error when installing the tensorflow version specified in the c1w1-ugl.

ERROR: Could not find a version that satisfies the requirement tensorflow==2.3.1 (from versions: 2.5.0rc0, 2.5.0rc1, 2.5.0rc2, 2.5.0rc3, 2.5.0, 2.5.1, 2.5.2, 2.6.0rc0, 2.6.0rc1, 2.6.0rc2, 2.6.0, 2.6.1, 2.6.2, 2.7.0rc0, 2.7.0rc1, 2.7.0) ERROR: No matching distribution found for tensorflow==2.3.1

Can someone check to confirm that 2.3.1 is the correct version? It looks like it's no longer available via pip. I'm using venv to create the virtual environment on a Windows 10 machine.

I was able to install tensorflow without a problem via pip install tensorflow, then deleted the tensorflow line from the requirements file and installed the remaining modules.
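
A minimal sketch of that workaround (assuming the edit to requirements.txt described above has already been made):

pip install tensorflow              # install whatever version is available for this Python
pip install -r requirements.txt     # install the remaining modules after removing the tensorflow pin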

Invalid credentials when running Jupyter notebook in Docker

I wanted to start a jupyter notebook in Docker following the instructions here.

I start the container with the command:

docker run -it --rm -p 8888:8888 -p 8000:8000 --mount type=bind,source="$(pwd)",target=/home/jovyan/work deeplearningai/mlepc1w1-ugl:jupyternb

The container starts and I can see Jupyter when going to localhost:8888; however, when I copy the token into the login form, I always get a message saying the credentials are invalid.


How can I log in to the notebook?

Thanks a lot for the help!

For M1 Mac I had to install ffmpeg@4 to be compatible with the required OpenCV

OpenCV 4.5.3 depends on ffmpeg v4, as it looks specifically for the file libavcodec.58.dylib. Since I am running the server notebook of the course1 week1 ungraded lab two years after its introduction, the current ffmpeg version is 6, where the file mentioned above no longer exists.
It would be great if this info were added to the environment setup instructions so that people install the correct required version of ffmpeg right away.
https://github.com/https-deeplearning-ai/machine-learning-engineering-for-production-public/blob/main/course1/week1-ungraded-lab/server.ipynb

See resource for required ffmpeg version: https://www.linuxfromscratch.org/blfs/view/11.0-systemd/general/opencv.html
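
A hedged sketch of the workaround on Homebrew (the versioned formula name is my assumption, not something stated in the lab instructions):

brew install ffmpeg@4
brew link ffmpeg@4 --force   # versioned formulae are keg-only, so linking may be needed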

C4_W1_docker_cannot bring up local hosts

I'm running this on Windows 11.

I have executed the command below
C:\Users\fahmi>docker run --rm -p 8501:8051^ --mount type=bind,^ source=C:/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu,^ target=/models/half_plus_two^ -e MODEL_NAME=half_plus_two -t tensorflow/serving

and it works, but when I try to open localhost:8501 there is a message "This page isn't working. localhost didn't send any data. ERR_EMPTY_RESPONSE." Does anyone know the problem?

and here's the bottom of the log
Successfully loaded servable version {name: half_plus_two version: 123}
2023-05-08 02:43:27.637689: I tensorflow_serving/model_servers/server_core.cc:486] Finished adding/updating models
2023-05-08 02:43:27.639982: I tensorflow_serving/model_servers/server.cc:118] Using InsecureServerCredentials
2023-05-08 02:43:27.640398: I tensorflow_serving/model_servers/server.cc:383] Profiler service is enabled
2023-05-08 02:43:27.658633: I tensorflow_serving/model_servers/server.cc:409] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2023-05-08 02:43:27.668963: I tensorflow_serving/model_servers/server.cc:430] Exporting HTTP/REST API at:localhost:8501 ...
[evhttp_server.cc : 245] NET_LOG: Entering the event loop ...
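
One thing worth double-checking (an observation about the command as pasted above, not a confirmed fix): the port mapping is written as -p 8501:8051, which publishes host port 8501 to container port 8051, while the log shows the REST API listening on container port 8501. A hedged sketch of the command with the mapping corrected:

docker run --rm -p 8501:8501 ^
  --mount type=bind,source=C:/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu,target=/models/half_plus_two ^
  -e MODEL_NAME=half_plus_two -t tensorflow/serving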

error downloading yolo

While running detect_and_draw_box() in part 1, it throws the error below:

Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
Could not establish connection. Download failed
Downloading yolov3_classes.txt from https://github.com/arunponnusamy/object-detection-opencv/raw/master/yolov3.txt
Could not establish connection. Download failed

FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\karri.anusha\.cvlib\object_detection\yolo\yolov3\yolov3_classes.txt'

please assist

Building wheel for opencv-python-headless error

Hi, I got this error when installing the required packages and can't find a solution.
Mac Pro (Intel)
ERROR: Failed building wheel for opencv-python
ERROR: Could not build wheels for opencv-python, which is required to install pyproject.toml-based projects

Broken link in C4_W3_Lab_1_Kubeflow_Pipelines.ipynb

In the lab "C4_W3_Lab_1_Kubeflow_Pipelines.ipynb"
Trying to run this command:
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION&timeout=300" with PIPELINE_VERSION=1.7.0

generates the following error on my Ubuntu 20.04:
error: trouble checking out href 1.7.0&timeout=300: exit status 1

The link leads to an empty GitHub page.

Wrong name in docker

Hello,
I think you have misspelled the Docker Hub address: it should be deeplearningai/mlepc1w1-ugl instead of deeplearningai/mlepc1w1-ugl:jupyternb.

Thanks

C3_W4_Model Remediation: a reference link is broken

  • Item link: Not provided
  • Specific issue area: Optional resources
  • Description: One of the links provided as additional resources is broken with a 404 error
  • Proposed solution: Update or remove the link from the list provided.

Running docker image container in the background or detached mode

The problem
When I run this command from the WSL terminal:
docker run --rm -p 80:80 mlepc4w2-ugl:no-batch
it runs in the foreground and blocks the terminal, so I need to open another terminal window or CMD to issue the cURL commands.
Solution
To run the cURL commands directly from the same WSL terminal, run the Docker container in the background (detached mode) by adding the -d (detached) flag to the docker run command, followed by the name of the Docker image.
What worked for me:
docker run -d -p 80:80 mlepc4w2-ugl:no-batch

bind source path does not exist: /tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu.

Expected Behavior

In C4_W1_Lab_2_TFS_Docker.md, the TF model should run under TF Serving after executing:

docker run --rm -p 8501:8501 \
  --mount type=bind,\
source=/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu,\
target=/models/half_plus_two \
  -e MODEL_NAME=half_plus_two -t tensorflow/serving &

Actual Behavior

I get the following error after running the above command:

docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu

Steps to Reproduce the Problem

Follow all steps in: https://github.com/https-deeplearning-ai/machine-learning-engineering-for-production-public/blob/main/course4/week1-ungraded-labs/C4_W1_Lab_2_TFS_Docker.md

Specifications

  • Platform: Ubuntu 22.04.1 LTS
  • Subsystem: Docker version 20.10.23, build 7155243
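
A hedged guess at the cause: the lab's earlier step of cloning the tensorflow/serving repository into /tmp/tfserving (which is what puts saved_model_half_plus_two_cpu on disk) may have been skipped. A sketch of recreating the expected source path:

mkdir -p /tmp/tfserving
cd /tmp/tfserving
git clone https://github.com/tensorflow/serving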

Course 3 | Week 2: TfidfVectorizer has no method get_feature_names()

In the ungraded lab of week 2 in course 3 (Ungraded lab: Algorithmic Dimensionality Reduction), the method get_feature_names() for TfidfVectorizer does not exist.

According to the sklearn documentation, it needs to be changed to get_feature_names_out().
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html

There are two occurrences of vectorizer.get_feature_names() in the notebook.
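
A minimal sketch of the change, using a throwaway corpus rather than the notebook's data:

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()
vectorizer.fit(["a tiny example corpus", "just to build a vocabulary"])

# Older scikit-learn releases:
# feature_names = vectorizer.get_feature_names()
# Newer releases (the method was removed in favour of):
feature_names = vectorizer.get_feature_names_out()
print(feature_names)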
