

Algorithmic Programmer and Scientist

Experience summary: 20 years of experience performing NMC&A (Nuclear Material Control and Accounting) and NDA (Non-Destructive Assay) in a technical lead role. Tasks have included measurement control, inventory, and statistical applications related to instrument calibrations, SNM (Special Nuclear Material) measurement analysis, measurement uncertainty analysis, and SNM mass modeling and simulation.

5 years of R experience in an industrial setting. Specific applications include linear and polynomial regression, k-means clustering, pattern analysis using kernel smoothing estimators, variance propagation and uncertainty analysis, forecasting with ARIMA and time series decomposition, population sampling, and text parsing.

5 years of Shiny experience building dashboard applications with interactive graphics, including sampling dashboards, a spectrum analysis application, a quality control charting application, and various R Markdown reporting applications.

I maintain my own medium-scale HPC that serves as a test bed: moving code directly from a PC to a large-scale HPC without intermediate testing can result in failures, and the medium-scale system bridges that gap. It offers cloud storage with 24 TB of available space, website hosting, and interactive web dashboards and charts. Its parallel computing resources support up to 88 virtual cores, all on a single board, so inter-core communication overhead is negligible; by contrast, large server banks lose time to network communication between machines. Parallel programming on the HPC is performed with Spark, and both Keras and TensorFlow are installed. Computing time is leased and scheduled. The system offers up to 125 GB of memory and 1 TB of swap space, which supports in-memory computation on data vectors up to roughly 1 TB in size.

Interactive Shiny web applications can access all computing resources on the server and are interfaced using SparkR. The server runs both Apache and Shiny Server, and access is provided over a high-speed business-class Internet connection.

About

$85/hr Ongoing

Download Resume


Skills & Expertise

Analytics, Apache, App Development, Data Management, Database Development, Financial Forecasting, Industrial Design, Lead Generation, Microsoft, Modeling, Networking, Pattern Design, Programming, R, R Programming, Report Writing, Science, Server Administration, Software Development, Spreadsheets, SQL, Vector Graphics, Web Development, Writing

0 Reviews

This freelancer has not received any feedback.