I am a Full-Stack Software Developer
Hi! My name is Christopher Norton. I was born and raised in Calgary, Alberta, and lived there until I started the Software Engineering program at the University of Victoria.
I have been programming since 2011, when I was just 14 years old and enjoyed making simple games using Unity. I am a passionate developer who is always striving to learn new technologies and use them to their fullest. My areas of interest are primarily Full-Stack Web Development and Machine Learning.
I am currently a Full-Stack Software Developer at TinyEye Technologies, where we develop an online therapy platform primarily geared towards Speech Therapy.
RobotC was used to program a semi-autonomous robot. The robot used infrared light to detect a beacon, pick up an object off the beacon, and then drop it off in a designated area. This process simulated a robot travelling underwater to retrieve debris.
This was completed as a team project, where I was primarily responsible for programming the robot and my teammates were responsible for building it.
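As a rough illustration of the retrieval behaviour, here is a simplified TypeScript sketch (the original code was written in RobotC); the helper functions, threshold, and motor power values are assumptions standing in for the real sensor and motor calls.

```typescript
// Stub hardware helpers: in the real project these were RobotC sensor/motor calls.
// The names, threshold, and power values here are illustrative assumptions.
function readIrIntensity(): number { return Math.random() * 100; }  // infrared beacon signal strength
function drive(leftPower: number, rightPower: number): void { console.log(`drive ${leftPower}/${rightPower}`); }
function closeClaw(): void { console.log('claw closed'); }
function openClaw(): void { console.log('claw opened'); }

const PICKUP_THRESHOLD = 80; // assumed signal strength when the robot is directly over the beacon

// Simplified control loop: drive toward the infrared beacon, grab the object
// once the signal is strong enough, then back away and release it.
function retrieveObject(): void {
  while (readIrIntensity() < PICKUP_THRESHOLD) {
    drive(40, 40);      // approach the beacon
  }
  drive(0, 0);          // stop over the beacon
  closeClaw();          // pick the object up off the beacon
  drive(-40, -40);      // reverse toward the designated drop-off area (simplified)
  openClaw();           // release the object
}

retrieveObject();
```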
This is a web application built by a team of three using AWS Amplify, React, and GraphQL. The goal was to build an online fair experience for members of the Port Alberni community, who could no longer host their regular fair due to COVID-19. Features were discovered, planned, and built based on discussions with members of the fair committee.
The online experience included booths to advertise or sell products, links to live streams (integrated with Facebook), and an online raffle.
This project was a success and was featured in the local news.
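To give a sense of how the React front end talked to the GraphQL API through Amplify, here is a hedged TypeScript sketch using the Amplify JavaScript client; the query, type, and field names (listBooths, livestreamUrl, and so on) are assumptions for illustration rather than the project's real schema.

```typescript
import { API, graphqlOperation } from 'aws-amplify';

// Hypothetical query; the actual schema and field names in the project differed.
const listBooths = /* GraphQL */ `
  query ListBooths {
    listBooths {
      items {
        id
        name
        description
        livestreamUrl
      }
    }
  }
`;

// Fetch every booth so the React front end can render the virtual fairgrounds.
async function fetchBooths(): Promise<any[]> {
  const result: any = await API.graphql(graphqlOperation(listBooths));
  return result.data.listBooths.items;
}
```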
This project was completed as my capstone for the Software Engineering program at the University of Victoria. The goal was to use real-time facial gesture recognition to help elderly individuals, autistic individuals, and individuals with other disabilities express themselves. The project consisted of three major modules: training the model, capturing the user's face and supplying it to the model, and validating the model.
To train the model we used a convolutional neural network (CNN) with multiple convolutional filters. We also used regularizers, along with common methodologies for normalizing all input values, to keep larger inputs from biasing the model and to help prevent overfitting.
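The sketch below shows what that kind of architecture might look like, written here with TensorFlow.js to keep the examples on this page in one language (the capstone itself was not necessarily built with this library); the layer sizes, L2 regularization strength, and seven output classes are illustrative assumptions.

```typescript
import * as tf from '@tensorflow/tfjs';

// Small CNN sketch: stacked convolutional filters with L2 regularizers,
// operating on face images whose pixel values are normalized to [0, 1].
function buildModel(): tf.Sequential {
  const model = tf.sequential();
  model.add(tf.layers.conv2d({
    inputShape: [48, 48, 1],                               // assumed grayscale face crops
    filters: 32,
    kernelSize: 3,
    activation: 'relu',
    kernelRegularizer: tf.regularizers.l2({ l2: 0.001 }),  // penalize large weights to limit overfitting
  }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.conv2d({
    filters: 64,
    kernelSize: 3,
    activation: 'relu',
    kernelRegularizer: tf.regularizers.l2({ l2: 0.001 }),
  }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.flatten());
  model.add(tf.layers.dropout({ rate: 0.5 }));             // another guard against overfitting
  model.add(tf.layers.dense({ units: 7, activation: 'softmax' })); // assumed 7 gesture classes
  model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });
  return model;
}

// Normalizing inputs: divide raw 0-255 pixel values by 255 so all features share a common scale.
const normalized = tf.div(tf.randomUniform([1, 48, 48, 1], 0, 255), 255);
buildModel().predict(normalized);
```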
Thank you for taking the time to read through my portfolio.
Please feel free to contact me at any time if you have any comments or questions.