3D Printing

 

1. INTRODUCTION

New production technologies such as 3D printing and other additive manufacturing technologies have changed the industrial manufacturing process, a change often referred to as the next industrial revolution. Such cyber-physical production systems combine the virtual and real worlds through digitization, model building, process simulation, and optimization. It is commonly understood that measurement technologies are the key to combining the real and virtual worlds.

     This change from measurement as a quality-control tool to a fully integrated step in the production process has also changed the requirements for 3D metrology solutions. At the same time, it is obvious that these processes not only require more measurements but also systems that deliver the required information at high density in a short time. Here optical solutions, including photogrammetry for 3D measurements, have big advantages over traditional mechanical CMMs.

    Accepted tool or exotic niche? As late as 1984, Gottfried Konecny stated in his textbook on photogrammetry: “Terrestrial photogrammetry has … some disadvantages … and is used only in special applications.” [Konecny, Lehmann 1984]. Industrial applications were even less favoured. The need to process and develop the film first, delaying the delivery of results by hours or even days, made the technology a very exotic tool in certain applications. This changed with the availability of digital cameras and powerful yet affordable computing devices starting in the late eighties. Among the pioneers in digital photogrammetry was Prof. Wilfried Wester-Ebbinghaus. He developed the digital solutions necessary for image measurement and camera calibration, bridging large-scale photography with digital scanning and réseau technology to form high-end photogrammetry solutions for industrial applications.

     Newer generations of digital cameras with larger sensors fostered the ability to deliver 3D measurement results on multiple points immediately after the measurement was taken, nowadays sometimes even in real time. These features, together with the high accuracy, flexibility, and mobility of camera-based measurement systems, create the core advantages of industrial photogrammetry up to now.

The first systems developed with a focus on industrial applications used a single handheld camera that was moved around the object, and were applied mainly to medium- and large-size industrial structures. Photogrammetry and remote sensing are two related fields, which is also manifest in national and international organizations. The International Society for Photogrammetry and Remote Sensing (ISPRS), founded in 1910, is a non-governmental organization devoted to the advancement of photogrammetry and remote sensing and their applications. There are only a few manufacturers of photogrammetric equipment. The two leading companies are Leica (a recent merger of the former Swiss companies Wild and Kern) and Carl Zeiss of Germany (before unification there were two separate companies: Zeiss Oberkochen and Zeiss Jena).


1.1 DEFINITIONS, PROCESSES AND PROJECT PREVIEW

1.1.1 PHOTOGRAMMETRY

There is no universally accepted definition of photogrammetry. The definition given below captures the most important notion of photogrammetry.

     “Photogrammetry is the science of obtaining reliable information about the properties of surfaces and objects without physical contact with the objects, and of measuring and interpreting this information.”

    The name “photogrammetry” is derived from three Greek words: phos or phot, meaning light; gramma, meaning letter or something drawn; and metrein, to measure.

     In order to simplify understanding of an abstract definition and to get a quick grasp of the complex field of photogrammetry, we adopt a systems approach. Fig. 1.1 illustrates the idea. In the first place, photogrammetry is considered a black box.

 

   Figure 1.1: Photogrammetry portrayed as a systems approach. The output includes photogrammetric products.

 The input is characterized by obtaining reliable information through processes of recording patterns of electromagnetic radiant energy, predominantly in the form of photographic images. The output, on the other hand, comprises photogrammetric products generated within the black box whose functioning we will unravel during this course.

1.1.2 Data Acquisition

Data acquisition in photogrammetry is concerned with obtaining reliable information about the properties of surfaces and objects. 

    This is accomplished without physical contact with the objects which is the most obvious difference to surveying.

The remotely received information can be grouped into four categories:

1. Geometric information involves the spatial position and the shape of objects. It is the most essential information source in photogrammetry.

2. Physical information refers to properties of electromagnetic radiation, e.g., radiant energy, wavelength, and polarization.

3. Semantic information is related to the meaning of an image. It is usually obtained by interpreting the recorded data.

4. Temporal information is related to the change of an object over time, usually obtained by comparing several images recorded at different times.


1.1.3 Photogrammetric Products

The photogrammetric products are derivatives of single photographs or composites of overlapping photographs. Fig.1.2 depicts the typical case of photographs taken by a camera. During the time of exposure, a latent image is formed which is developed to a negative. At the same time diapositives and paper prints are produced. Enlargements may be quite useful for preliminary design or planning studies. 

                                                  Fig.1.2 Photographic product.

1.1.4 Computational Results

Aerial triangulation is a successful application of photogrammetry. It delivers 3-D positions of points, measured on photographs, in a ground control coordinate system, e.g., state plane coordinate system. Profiles and cross sections are typical products for highway design where earthwork quantities are computed. Inventory calculations of coal piles or mineral deposits are other examples which may require profile and cross section data.

     A recent addition to photogrammetric instruments is the softcopy workstation. It is the first tangible product of digital photogrammetry. Consequently, it deals with digital imagery rather than photographs.


1.2 Motivation

A common scenario where a concept such as this can come in handy is when someone, either a student at school or a hobbyist in a small workshop, wishes to create a small part as a 3D model that has to be realized in physical form. A 3D printer is available for this purpose, but preferably the part needs to be made in a more robust material, like metal, and those kinds of printers are far out of reach. The student might also lack the knowledge required to set up and operate a CNC milling machine, while the employees who possess that knowledge do not have time or are otherwise engaged. A solution where all they need to do is upload a file containing the part, insert some material, and press a button could save them both time and frustration. The concept can also help other professions achieve better outcomes by integrating photogrammetry into their work.


1.3 PROBLEM DESCRIPTION

There are many problems that arise in traditional replicating and manufacturing processes which can be solved easily with the use of a photogrammetry-integrated system. The problems that are solved are as follows:

     Replicating handmade products is a difficult task, because machining operations require the dimensions of the product. It becomes even more difficult to obtain the dimensions of a product with many curves.

    If we use casting, the surface finish is poor and the process is limited to a certain degree of complexity.

      If we go for injection moulding, again we need the dimensions of the product. If we use a 3D scanner, we either get a poor model of the product or a good model at extremely high cost.

     These and many more difficulties can be overcome by using photogrammetry to produce the 3D model with ease and at low cost.


1.4 OBJECTIVES OF WORK

  • To replicate handmade products, including products with many curves, without manually measuring their dimensions for machining.

  • To avoid the poor surface finish and limited complexity of casting.

  • To obtain the product dimensions needed for processes such as injection moulding without resorting to a costly 3D scanner.

  • To produce an accurate 3D model easily and cost-efficiently using photogrammetry.


1.5 ANALYSED PROCESS LAYOUT

                                    Fig.1.3 Organized work process flow chart




2. LITERATURE SURVEY

PHOTOGRAMMETRY OVERVIEW

Today, 3D models are used in a wide variety of fields. The medical industry uses detailed models of organs. The movie industry uses them as characters and objects for animated and real-life motion pictures. The architecture industry uses them to demonstrate proposed buildings and landscapes. The engineering community uses them as designs of new devices, vehicles, and structures, as well as for a host of other purposes. The use of three-dimensional computer graphics and visualization techniques is becoming increasingly popular because these techniques produce more realistic object models than graph-based ones. However, most applications of 3D modelling and visualization require large and complex 3D model data. The basic data source for three-dimensional (3D) modelling of regular or irregular surfaced objects is known (or calculated) point coordinates. Obtaining a 3D model of an irregular surfaced object requires plenty of points to represent the surface exactly. These points can be obtained easily both by traditional methods and from measurements on photographs. In this study, the suitability of close-range photogrammetry for 3D modelling has been investigated. Therefore, an irregular surfaced artificial object was used to assess the capability of photogrammetric methods.

      In 3D computer graphics, a 3D model is a mathematical representation of a three-dimensional object. It can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena.

     3D models are most often created with special software applications called 3D modelers. Being a collection of data (points and other information), 3D models can be created by hand or algorithmically (procedural modeling). Though they most often exist virtually (on a computer or in a file on disc), even a description of such a model on paper can be considered a 3D model. Close-range photogrammetry offers the possibility of obtaining the three-dimensional (3D) coordinates of an object from two-dimensional (2D) digital images in a rapid, accurate, reliable, flexible, and economical way. This makes it an ideal tool for precise industrial measurement.

    Digital close-range photogrammetry is a technique for accurately measuring objects directly from photographs or digital images captured with a camera at close range. Multiple overlapping images taken from different perspectives produce measurements that can be used to create accurate 3D models of objects. Knowing the position of the camera is not necessary, because the geometry of the object is established directly from the images.

2.1 DIGITAL CLOSE-RANGE PHOTOGRAMMETRY

Photogrammetry is a technique for determining the position, size, and shape of an object from photographs instead of by direct measurement. The term close-range photogrammetry is used to describe the technique when the size of the object to be measured is less than about 100 m and the camera is positioned close to it. Images are obtained from camera positions all around the object. The camera axes of the shots are parallel only in special cases; usually they are highly convergent. In photogrammetry, the position of a point in space is commonly defined in a 3D Cartesian coordinate system whose origin, scale, and orientation can be arbitrarily defined. It is often necessary to transform between coordinates in systems having different origins, orientations, and scales. Coordinate transformations may be divided into three parts: rotation, translation, and scale change. The rotation matrix Rκϕω can be expressed by Eq. (1).

    Rκϕω = Rκ · Rϕ · Rω =

    | cosϕ·cosκ     sinω·sinϕ·cosκ − cosω·sinκ     cosω·sinϕ·cosκ + sinω·sinκ |
    | cosϕ·sinκ     sinω·sinϕ·sinκ + cosω·cosκ     cosω·sinϕ·sinκ − sinω·cosκ |      (1)
    | −sinϕ         sinω·cosϕ                      cosω·cosϕ                  |

     where ω, ϕ, κ are the sequential rotations transforming the primary axes x′, y′, z′ into the secondary axes x, y, z, as shown in Fig. (2.1).

If the origin of the primary axes is translated by Tx, Ty, Tz, and the scale is multiplied by K, seven parameters define the whole coordinate transformation: Tx, Ty, Tz, ω, ϕ, κ, K. The central perspective projection, Figure (2.2), is the starting point for a model in close-range photogrammetry.
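The seven-parameter transformation described above can be sketched in numpy. This is a minimal sketch; the individual axis conventions (ω about x, ϕ about y, κ about z) are one common photogrammetric choice and an assumption here, and the function names are illustrative:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation R = R_kappa @ R_phi @ R_omega (angles in radians).
    Axis conventions are an assumed, common photogrammetric choice."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])   # about x
    R_phi = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])     # about y
    R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])   # about z
    return R_kappa @ R_phi @ R_omega

def seven_parameter(points, T, K, omega, phi, kappa):
    """Transform points (one per row): X = T + K * R @ x."""
    R = rotation_matrix(omega, phi, kappa)
    return T + K * (R @ points.T).T
```

With zero rotations the matrix reduces to the identity, so the transformation is a pure translation and scaling.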

     Figure 2.2 shows the elements, the perspective relationship, and the collinearity of A, O, and a. The collinearity condition places the image point a (xa, ya, −f), the perspective centre O (X0, Y0, Z0), and the object point A (XA, YA, ZA) on the straight line AOa. The collinearity equations (2) allow the image coordinates (xa, ya) to be estimated from the object coordinates (XA, YA, ZA):

    xa = −f · [r11(XA − X0) + r12(YA − Y0) + r13(ZA − Z0)] / [r31(XA − X0) + r32(YA − Y0) + r33(ZA − Z0)]
                                                                                                             (2)
    ya = −f · [r21(XA − X0) + r22(YA − Y0) + r23(ZA − Z0)] / [r31(XA − X0) + r32(YA − Y0) + r33(ZA − Z0)]

     where rij are the elements of the rotation matrix R, in which the rotation angles to be determined appear.

      Figure (2.2) shows the central perspective of A with three-dimensional object coordinates and the homologous point a in the projection plane. It shows the principal point P, the perspective centre O, the focal length f, and the collinearity of A, O, and a.
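The collinearity relationship can be expressed as a small projection function. This is a minimal numpy sketch: the sign convention follows the image point a (xa, ya, −f), and the rotation matrix is assumed to rotate object-space displacements into the image frame:

```python
import numpy as np

def project(X_obj, X0, R, f):
    """Collinearity equations: image coordinates of object point X_obj
    seen from perspective centre X0 with rotation R and focal length f."""
    d = R @ (X_obj - X0)      # object-space vector rotated into the image frame
    x = -f * d[0] / d[2]
    y = -f * d[1] / d[2]
    return np.array([x, y])
```

For a camera at the origin with identity rotation, a point at (1, 2, −10) with f = 5 projects to (0.5, 1.0), illustrating the scale-by-depth behaviour of the perspective projection.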

     When several cameras placed around an object are used, the configuration is called a multistation convergent network. If non-metric cameras are used, additional parameters can be included. These equations can be written as (3):

    F(x, b, a) = 0                                                                    (3)

     where x is a vector representing the parameters to be estimated; b is a vector representing the measured elements; and a is a vector representing those elements whose values are known constants. Eq. (3) is a functional model of photogrammetry based on the collinearity equations.

     The vector b includes all photo-coordinates and any measurements made on or around the object. The coordinates of the targets will generally be included in x. If the cameras have undergone prior calibration, and these calibrated values are accurate, then the calibrated values could be included in a. If no prior calibration values have been obtained, it is possible to include calibration elements in x only (this procedure is referred to as self-calibration). The elements of exterior orientation of the cameras may have been evaluated by a prior process, so if it is reasonable to assume they have remained unchanged, the values can be included in the next process as either constants (in a) or as measurements (in b and in x).

     In close-range photogrammetry, where more measurements are available than the minimum necessary to evaluate the unknown elements, it is possible to justify the use of least squares estimation (LSE) solely on the basis of statistical probabilities. LSE provides a systematic method for computing unique values of coordinates and other elements in close-range photogrammetry based on many redundant measurements of different kinds and weights. It allows covariance matrices of the estimates to be readily derived from the covariance matrix of the measurements. If a covariance matrix of the measurements is assumed, a priori analysis can be used to design a camera/object configuration and measurement scheme to meet criteria relating to precision, reliability, and accuracy. This attribute of LSE is particularly useful in close-range photogrammetry, where almost every measurement task has unique features.

    LSE is also flexible: it allows elements to be treated as unknowns, as measurements, or as constants, depending on circumstances. The main disadvantage of LSE is that it does not make it easy for the user to identify blunders in the measurements.
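The weighted LSE idea can be illustrated with a toy redundant system. The observation equations, measured values, and weights below are entirely hypothetical; the point is only the mechanics of the weighted normal equations and the covariance that falls out of them:

```python
import numpy as np

# Four redundant linear observations of a 2D unknown (hypothetical values).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])  # design matrix
b = np.array([2.0, 3.0, 5.1, -0.9])                               # measurements
w = np.array([1.0, 1.0, 0.5, 0.5])                                # weights

# Weighted normal equations: (A^T W A) x = A^T W b
W = np.diag(w)
N = A.T @ W @ A
x_hat = np.linalg.solve(N, A.T @ W @ b)

residuals = A @ x_hat - b              # inspecting these can hint at blunders
covariance = np.linalg.inv(N)          # covariance of the estimate (up to sigma0^2)
```

The redundancy (four observations for two unknowns) is what makes the residuals and the covariance matrix meaningful.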

      Compared to classical surveying methods, digital close-range photogrammetry is efficient and rapid, significantly reducing the time required to collect data in the field: measurements collected in less than three days in the field would have taken ten days in a conventional survey. Second, it is considerably safer: all surveyors were able to obtain precise measurements without physically accessing each measurement point. Third, the method is non-intrusive, creating minimal impact on traffic flow. Finally, the process produces a comprehensive visual record of existing site conditions from which any identifiable feature can be measured or geometrically assessed at a later date. Digital close-range photogrammetric methods have been successfully applied to projects in archaeology, architecture, automotive and aerospace engineering, and accident reconstruction. Digital photogrammetric systems allow the use of conventional digital cameras and consequently lower costs.

3D Digitizing and Modeling Techniques

Techniques for 3D digitizing and modeling have been rapidly advancing over the past few years. The ability to capture details and the degree of automation vary widely from one approach to another. One can safely say that there is no single approach that works for all types of environments and at the same time is fully automated and satisfies the requirements of every application.

     The process of creating 3D models from real scenes has a few well-known steps: data collection, data registration, and modeling (geometry, texture, and lighting). There are many variations within each step, some of which are listed in Figure (2.3).

       Figure 2.3: The main steps for creating 3D models from real scenes.

     Passive image-based methods, mostly based on photogrammetry, have been developed for specific applications such as architecture [Debevec et al., 1996]. Those needing user interaction have matured to a level where commercial software is now available (e.g., PhotoModeler and ShapeCapture). For relatively simple objects, structures, or environments, most existing methods will work with varying degrees of automation, level of detail, effort, cost, and accuracy. Many researchers have presented examples of those types of models in the past five years. However, when it comes to complex environments, the only proven methods so far are those using positioning devices, CAD or existing models, and an operator in the loop.

2.2 PHOTOGRAMMETRIC PROCESSING

 Office work is a stage subsequent to field work; it serves the purpose of processing previously gathered information. It is carried out in two stages: one involving the storage of the data contained on the index cards and the other involving data processing. In the first stage, a database must be created in order to speed up the management of information. In the second stage, 3D models reproducing the original structure at a fixed scale are obtained by means of a digital photogrammetric station. This processing is divided into the following stages:

    1. Inner orientation. This operation entails the reconstruction of the perspective rays in conditions similar to those of their formation within the photographic camera, using the values obtained in the calibration process (radial and decentring lens distortion, focal length, and position of the principal point). By means of inner orientation we can eliminate errors arising from the use of non-metric cameras. The camera calibration was done using the Camera Calibrator 4.0 software included in the PhotoModeler Pro 5.0 digital photogrammetric station. The method used by this software is the self-calibrating bundle adjustment, which requires taking some preliminary shots of a calibration grid in order to obtain the inner orientation parameters of the camera.

   2. Exterior orientation. In this stage, the rays generated in the inner orientation process are positioned in relation to the ground in the very same position adopted at the moment of exposure of the photographs. The photogrammetric coordinates of a minimum of five points shared with another photograph, already orientated or to be orientated in the same process, must be measured. As ground control points are not available, the system carries out a free network adjustment. This includes two operations: relative and absolute orientation.

  3. Relative orientation. The evaluation of the exterior orientation elements of one camera with regard to the photo coordinate system of another is known as relative orientation. The simultaneous intersection of at least five pairs of homologous rays distributed through the model is enough for the remaining points to intersect as well, according to perspective geometry. The rays’ equations are calculated analytically, and the relative orientation parameters can be calculated by applying the coplanarity condition to the homologous rays (the vectors defining the projection of every ground point on the photographs) for each pair of photos. Adopting the coordinate system based on the first picture (origin at the centre of projection, X- and Y-axes on a plane parallel to the plate, and the Z-axis in the direction of the principal axis), relative orientation solves the problem of calculating the relation between the photogrammetric coordinate system and the model coordinate system by means of the coplanarity conditions, as shown in Fig. (2.4).

                              

    Figure 2.4, Reference systems of two photographs in order to carry out relative orientation.
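The coplanarity condition behind relative orientation can be checked numerically. This is a synthetic illustration with identity camera rotations (so both rays are already in one frame), not the full parameter estimation; the coordinate values are invented:

```python
import numpy as np

def coplanarity(b, r1, r2):
    """Scalar triple product b . (r1 x r2); zero when the base vector and
    the two homologous rays lie in one (epipolar) plane."""
    return np.dot(b, np.cross(r1, r2))

# Two perspective centres and one ground point (hypothetical values)
C1 = np.array([0.0, 0.0, 0.0])
C2 = np.array([1.0, 0.2, 0.0])
P = np.array([0.5, 0.8, 5.0])

b = C2 - C1        # base vector between the projection centres
r1 = P - C1        # ray from camera 1 (identity rotation assumed)
r2 = P - C2        # ray from camera 2, expressed in the same frame
```

For truly homologous rays the triple product vanishes; perturbing one ray makes it non-zero, which is exactly the residual that relative orientation minimises over its parameters.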

    4. Absolute orientation. Once the model has been established, we must adjust it to the ground coordinate system by means of absolute orientation. The Z-axis is established from the direction defined by the plumb lines, and this is how the model is levelled. The scale factor is attained via the distances measured on the plumb lines by means of a simultaneous bundle adjustment. Then we proceed to a new adjustment which ensures correct levelling, orientation, and scaling of the 3D model. The homothetic relation (scale factor) between the model obtained in the relative orientation and the ground truth is calculated by measuring the real distances between the points marked on the plumb lines and comparing them with the distances in the model.

    [X  Y  Z]^T = [X0  Y0  Z0]^T + k · R · [x  y  z]^T                                (4)

   where (X0, Y0, Z0) are the ground coordinates of the camera principal point, k the scale factor previously calculated, and R a rotation matrix with Ω and K rotation angles obtained in the model levelling stage.

 Levelling is achieved by applying rotations around the X- and Z-axes (that means that ϕ = 0 in the rotation matrix), which results from testing the model coordinates of the points marked on the plumb lines (the fact that these are vertical enables us to establish that after the transformation they have the same X and Z coordinates). The transformation equations from the photogrammetric system (x, y, z) to the ground coordinate system (X, Y, Z) are similar to the ones used in relative orientation.
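The scale-factor computation from plumb-line distances, and the model-to-ground transformation of Eq. (4), might be sketched as follows. All distance values here are invented for illustration, and the function name is hypothetical:

```python
import numpy as np

# Homothetic scale factor: real distances between points marked on the
# plumb lines versus the same distances measured in the relative model.
real_distances = np.array([1.000, 1.000, 0.500])    # metres (hypothetical)
model_distances = np.array([0.201, 0.199, 0.100])   # model units (hypothetical)

k = np.mean(real_distances / model_distances)       # scale factor, about 5 here

def to_ground(x_model, X0, k, R):
    """Eq. (4): ground coordinates from model coordinates."""
    return X0 + k * (R @ x_model)
```

Averaging the individual distance ratios spreads the measurement error over all plumb-line observations instead of trusting a single pair.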

                                          

    Figure 2.5: Absolute orientation entails the computation of the transformation parameters between model and ground coordinate system.

   5. Restitution of the models (after scaling and orientation). The information contained in the photographs is materialized in a document, whether plans, digital files (3D and/or 2D), coordinate lists with information on the errors observed, etc. Several modalities of restitution are available: points, lines, polylines, or other graphic entities of interest (cylinders, circumferences, etc.).

     The resulting models containing metric information and the 3D models are ready to be exported in conventional formats (dxf, dxb, vrml, etc.) into other programs in order to be visualized, edited, or processed. Another feature of the system is its capability to generate 3D models of surfaces and the subsequent projection of real textures, captured from the object’s photographs, onto these models.

2.3 3D MODELING SETUP AND PROCEDURE

Photogrammetry techniques can be used in the manufacturing process in-line or off-line. They can be applied in the process of development of a product (research and development). Some companies use photogrammetric methods to generate a highly accurate reference coordinate system, especially for the measurement of very large objects. The output of a photogrammetric process can be: 3D point coordinates, topographical maps, or rectified photographs (orthophotos).

Fig 2.6 Representation of a photogrammetry process

  2.3.1 System and software

Passive image-based methods (e.g., photogrammetry or computer vision) acquire 3D measurements from single images; they use projective geometry and are very portable [Remondino 2003]. The system used is based on a digital camera (4-megapixel resolution), photogrammetry-based software (PhotoModeler), and post-processing CAD software (Raindrop Geomagic Studio). Particular attention must be paid when setting up the process. The camera is to be placed on a tripod, and the environment must be fairly well lit. The quality of the images used for the reconstruction is a basic requirement for getting the best results with the photogrammetry technique: the better the photos, the better the results. So high-resolution pictures are needed (min 1200x1900 pixels). Photogrammetry works well if some tricks are adopted, such as a white background, coded targets on the object, and pictures taken every 30-40 degrees.


2.3.2 Camera calibration

The lens focal length and the lens distortion are very important for the reliability of the whole project [Tangelder 2003]; they can be obtained by calibrating the camera using the calibration pattern provided with PhotoModeler.

                                             Fig 2.7 Calibration pattern

      The calibration process works if six or more pictures of a dense point grid are taken from different angles. The Camera Calibrator needs the distance between control points 1 and 4 on the projected or printed pattern (scaling phase). It is important to highlight that the pattern should fill as much of the photograph as possible [PhotoModeler 2002]. The user should be careful to include all the control points in each picture.
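As an illustration of one thing calibration recovers, below is a minimal sketch of the radial part of a common lens-distortion model. The coefficients k1 and k2 stand in for values a self-calibrating bundle adjustment would estimate; the decentring terms mentioned earlier are omitted for brevity:

```python
def radial_distortion(x, y, k1, k2):
    """Radial lens distortion applied to ideal image coordinates (x, y),
    measured from the principal point. k1, k2 are placeholder coefficients
    of the kind a self-calibrating bundle adjustment estimates."""
    r2 = x * x + y * y                   # squared radial distance
    factor = k1 * r2 + k2 * r2 * r2      # radial polynomial
    return x * (1.0 + factor), y * (1.0 + factor)
```

With zero coefficients the image point is unchanged; a positive k1 pushes points outward with growing radius, which is why distortion is most visible at the frame edges and why the calibration pattern should fill the photograph.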

2.3.3 Accuracy

The photogrammetry approach is commonly considered not very precise. With CMM systems it is possible to obtain measured points with accuracy near a micrometre; with a photogrammetry-based approach it is quite normal to get accuracy at the millimetre level, although special tricks may be adopted to acquire points with higher accuracy. So a close-range technique may be a suitable solution for model measurement and/or reconstruction if very high precision is not required. PhotoModeler offers the possibility of achieving accuracy of 1:2000 of the object size or better, so that for an object with a 10 m longest dimension it can produce 3D coordinates with 5 mm accuracy at 95% confidence. If other factors are taken care of (good geometry, good camera calibration), it is possible to achieve 1:25,000 or higher accuracy in a project that is entirely or substantially done with these particular tricks.
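The accuracy ratios quoted above reduce to simple arithmetic; a tiny helper makes the relationship explicit (the function name is illustrative):

```python
def accuracy_mm(longest_dimension_m, ratio):
    """Point accuracy in millimetres for an accuracy ratio of 1:ratio
    applied to the object's longest dimension in metres."""
    return longest_dimension_m * 1000.0 / ratio

print(accuracy_mm(10.0, 2000))    # 5.0 (mm), the 1:2000 case above
print(accuracy_mm(10.0, 25000))   # 0.4 (mm), the 1:25,000 case
```

The same ratio therefore yields very different absolute accuracies depending on object size, which is why accuracy figures in photogrammetry are usually quoted as ratios rather than in millimetres.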

2.3.4 Process and methods of photographing

The most essential element of the photogrammetric process is to get as much coverage of the object as possible. It is not enough to get the whole object on a minimum number of photographs; a lot of attention must be paid to getting a lot of overlap between photographs, as in Fig. (2.8). The algorithm that reconstructs the form of the physical object compares photographs of the object and puts them into pairs, so a bigger overlap allows the algorithm to compute a larger number of shared points from which it later creates a model. A larger overlap also means that every point on the object is visible in multiple photographs; if a point is visible in at least three photographs, it can be precisely triangulated in three-dimensional space. Such photographs were taken for this research as well.

                               Fig 2.8 Structure-from-Motion (SfM).


      Alongside their impact on resolution and the model’s point density, the photographs also determine the quality of the texture.

      Certainly, the number of pixels in a photograph is not necessarily a measure of its quality. It is better to have pixels with good focus, i.e., a quality lens and camera sensor that create the least amount of noise. It is better to have fewer pixels of high quality than many low-quality pixels: many low-quality pixels mean a longer processing time to recreate a 3D model that will, in the end, have worse surface quality and a bad texture.

    Besides the coverage of the photographs and their resolution, the lighting of the object is also of immense importance. Just as the complexity of colours makes it easier to find shared points, the shadows on the object help with reconstructing its form. Having too many shadows is bad, however, because they have a negative influence on the reconstruction of both form and texture.

     In controlled conditions there are ways to completely remove shadows while still achieving a good reconstruction of the 3D model and texture. That is achieved by a combination of well-placed lights, a ring flash, and a polarization filter which filters reflections from the object. In outdoor conditions it is optimal to take photographs in the middle of a cloudy summer day, because there is a lot of ambient light and colours and details show up on the object without shadows.

      For a simple reconstruction process, it is preferable to have the whole object on all of the photographs, although this is not necessary. In this way the algorithm easily recognizes all the locations of points on the object, and can use the silhouette of the object to further simplify the process.

     The best overlap of photographs is achieved when they have a radial offset of 10 to 30 degrees, while the acceptable range of offsets is 5 to 45 degrees. When the offset is larger than 30 degrees there is not enough overlap, and the 3D model will be reconstructed with holes (missing parts) or might not be reconstructed at all [15]. With an offset of less than 10 degrees there is a lot of redundancy between photographs, which greatly increases the time required for reconstruction without actually increasing the quality of the 3D model.

     The position of the camera is easy to control if there is always an equal distance from the object and the camera is always at the same height level. After one circle around the object, the height at which the camera is held changes and another circle of photographs is made around the object.

It is important to tilt the camera towards the previous circle of photographs so they would all be radially offset vertically as well. If the circular sets of photographs are always facing the horizon (cylindrical camera positions), it is possible to have bad overlap. With photographs that have both horizontal and vertical offset, the overlap will be the best, and so the 3D model will be of a lot higher quality.
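The hemispherical arrangement of camera stations described above can be generated programmatically. This is a sketch: the ring elevations and the 20-degree azimuth offset are example values chosen to fall within the 10-30 degree guideline, not values prescribed by the text:

```python
import numpy as np

def camera_positions(radius, offset_deg=20.0, rings_deg=(10.0, 35.0, 60.0)):
    """Hemispherical camera stations around an object at the origin:
    one circle per elevation angle, one station every offset_deg of
    azimuth (all angle values here are illustrative assumptions)."""
    positions = []
    for elev in np.radians(rings_deg):
        for az in np.radians(np.arange(0.0, 360.0, offset_deg)):
            positions.append((radius * np.cos(elev) * np.cos(az),
                              radius * np.cos(elev) * np.sin(az),
                              radius * np.sin(elev)))
    return positions
```

Every station lies at the same distance from the object, and consecutive circles are vertically offset, which gives both the horizontal and the vertical overlap the text recommends.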

     In summary, this research satisfied all of the conditions: 

  • the photographs had good focus;

  • noise in the photographs was low due to favorable lighting;

  • the surface colour of the physical object showed no discrepancies, thanks to the absence of cast shadows;

  • the object cast few shadows, which improved the texture quality;

  • more than sufficient overlap between photographs was present;

  • camera positioning was hemispherical in regards to the center of the object.


2.3.5 Analysis of the object selection process

When using photogrammetry, all photographs which meet any of the following criteria should be discarded:

  • photographs which are out of focus: all photographs should have the same depth of field;

  • overexposed photographs: this “burns out” the photograph and colour information is lost;

  • underexposed photographs: it is hard to differentiate between what is shadow and what is the object;

  • redundant photographs: photographs made from the exact same angle increase processing time without improving the quality of the 3D model.

   Since this research used a relatively simple subject and had an experienced operator, there were no discarded photographs.

2.3.6 Adjustment of photographs

As for the photograph adjustment process, photographs can sometimes have minor imperfections that can be removed in any photo-processing application – slight blur, noise that appears in low-light conditions, unbalanced exposure, rotation of the camera, etc. In this research, the only thing that needed to be adjusted was the exposure difference between photographs, which was expected given the equipment used and the lighting in which the object was photographed.

     The exposure was normalized across photographs in such a manner that overexposed photographs were slightly darkened and underexposed photographs were slightly brightened, which, in the end, turned out to be beneficial to the quality of the texture of the 3D model.
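A simplified sketch of that normalization, assuming greyscale pixel lists; a real photo editor would apply the correction in a non-linear (gamma-corrected) colour space:

```python
def normalize_exposure(images, target=None):
    """Scale each image's pixels so all images share the same mean brightness.

    Overexposed frames are darkened and underexposed frames brightened.
    `images` is a list of flat greyscale pixel lists (values 0-255).
    """
    means = [sum(img) / len(img) for img in images]
    if target is None:
        target = sum(means) / len(means)   # aim for the set's average brightness
    out = []
    for img, mean in zip(images, means):
        gain = target / mean               # >1 brightens, <1 darkens
        out.append([min(255.0, p * gain) for p in img])
    return out
```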

2.3.7 Analysis of the 3D model creation process

For the 3D model reconstruction, the whole data set of 199 photographs was loaded into the selected specialized photogrammetry application for 3D modelling. The surface of the resulting 3D model was of a quality usable for different purposes, especially for online display, which was one of the main goals of the project. The quality of the texture was higher than needed but, since the images the texture was made from do not take up much storage space, they will be kept for future reference.

    Qualitative analysis has shown that the described approach can provide data of higher quality than needed, even when using non-specialized equipment for data acquisition and processing. The model was reconstructed at several mesh resolutions and several texture resolutions.

    All the variations of the same model are shown in Table 1. The best result was achieved when all the photographs from the experiment were used and the 3D model was reconstructed at the highest quality with the highest-quality texture (size increase of ca. 242%), but weighing the needs of the research against the processing time required, that quality was not necessary.


                        Table 1. Variations of the reconstructed 3D model.

    

    The best ratio of reconstruction time to quality is obtained when the complete set of photographs is used and the 3D model is reconstructed in medium quality with a medium-quality texture (size increase of ca. 14%). Such a model requires less processing time and loads faster, so it is easier to browse and edit collections of multiple models. It can also be uploaded faster to online services for displaying 3D models and, for the same reason, loads faster on slower devices such as mobile phones and tablets.

      

2.4 FUSED DEPOSITION MODELLING / 3D PRINTING

A prime example of rapid prototyping is 3D printing. There are numerous kinds of 3D printers, ranging from multi-million-dollar commercial machines to simple hobby versions. Common to many printers is that the prototype is manufactured one layer at a time. It can be compared to regular 2D printing on paper, only with one extra dimension, hence the name 3D printing. The many different printers result in many different printing techniques, most of them so-called additive manufacturing processes. Some of these are FDM (Fused Deposition Modelling), PBP (Powder Bed Printing) and SLS (Selective Laser Sintering).

     Fused Deposition Modelling (FDM), or Fused Filament Fabrication (FFF), is an additive manufacturing process that belongs to the material extrusion family. In FDM, an object is built by selectively depositing melted material in a pre-determined path, layer by layer. The materials used are thermoplastic polymers and come in filament form.

     FDM can easily be understood as drawing with a very precise hot glue gun. The process begins with G-code-generating software that determines how the extruder, the part that deposits material, will draw out each layer to build up the model, as Figure 2.8 shows.

                                              

          Figure 2.8: Pattern employed by the FDM process to build a layer of a "C" ring.

     FDM is the most widely used 3D Printing technology: it represents the largest installed base of 3D printers globally and is often the first technology people are exposed to.

      A designer should keep in mind the capabilities and limitations of the technology when fabricating a part with FDM, as this will help achieve the best result.

The actual printing process works by using a motor to feed a filament of material through a heating element that melts it, typically at a temperature between 170 and 300 degrees Celsius depending on the material. The filament emerges molten and quickly hardens, bonding with the layer below it. The nozzle and/or the build platform moves in the X-Y (horizontal) plane, then advances along the Z-axis (vertically) once each layer is complete. In this way, the model is built one layer at a time from the bottom upwards. Some prototypes are highly complex and require some degree of support to avoid extruding material into thin air. Because of this, FDM printers may use two nozzles: one deposits the model material, while the other deposits support material. There are also printers with more than two nozzles for printing in multiple colours. If support material was used, it is snapped off or dissolved in solvent after printing, leaving behind the finished model.
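As a toy illustration of what the G-code-generating software produces, the sketch below (with made-up feed and extrusion values) emits the moves for one square perimeter of a single layer:

```python
def square_layer_gcode(size_mm, z_mm, feed=1200, extrude_per_mm=0.05):
    """Emit G-code for one square perimeter of a layer.

    A toy version of what a slicer does: move to the layer height, then
    draw the perimeter while advancing the extruder (E axis) in proportion
    to the distance travelled. Values here are illustrative, not tuned.
    """
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = [f"G1 Z{z_mm:.2f} F{feed}"]                            # move up to this layer
    lines.append(f"G0 X{corners[0][0]:.2f} Y{corners[0][1]:.2f}")  # travel move, no extrusion
    e = 0.0
    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        e += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * extrude_per_mm
        lines.append(f"G1 X{x1:.2f} Y{y1:.2f} E{e:.4f} F{feed}")
    return lines
```

A real slicer repeats this for every layer, adding infill, travel optimization and temperature control on top.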

      FDM uses high-quality industrial-grade plastics such as ABS, polycarbonate (PC), etc. to produce strong, robust parts suitable for functional use. While there are many advantages, there are some disadvantages. If support material has been used, removing all of it can be a hassle. FDM also does not have the best surface finish: since it lays down layers like a glue gun, the lines of each layer are quite easy to see. If surface finish is important, PBP and SLS can be alternatives.


2.4.1 PBP and SLS

Powder bed printing (PBP) and selective laser sintering (SLS) are two very similar printing techniques. Like FDM, the prototype is built up from thin layers of a 3D model. The printer itself consists of two adjacent tanks of powder, where one of them is the build tank and the other the powder reservoir. In PBP, an inkjet print head moves across the top of the powder in the build tank, selectively depositing a liquid binding material. When the material is bound, the build tank is lowered, a fresh layer of powder is spread across the top, and the process is repeated. SLS uses a laser instead of binding material, and binds the powder by local melting. When the model is complete, the unbound powder (support) is easily removed.

Parts made with PBP have high local accuracy and surface finish, but models may warp when post-processed. The parts are therefore not suitable for use as functional parts. SLS however, can make durable functioning parts with high surface finish depending on the powder used. But even though these parts have a high surface finish, it is nothing compared to what can be achieved with CNC machining.

2.4.2 Common FDM Materials:

One of the key strengths of FDM is the wide range of available materials, ranging from commodity thermoplastics (such as PLA and ABS) to engineering materials (such as PA, TPU, and PETG) and high-performance thermoplastics (such as PEEK and PEI), as shown in Fig 2.9.

                   

                                                      Fig 2.9: material pyramid


      The thermoplastic materials pyramid available in FDM. As a rule of thumb, the higher a material sits in the pyramid, the better its mechanical properties.

     The material used will affect the mechanical properties and accuracy of the printed part, but also its price. The most common FDM materials are summarized in the table below. A review of the main differences between PLA and ABS, the two most common FDM materials, and an extensive comparison of all common FDM materials can be found in the dedicated articles.

 

                                      Table 2: FDM - materials and characteristics.

Material        Characteristics

ABS             Good strength; good temperature resistance; more susceptible to warping

PLA             Good strength; easy to print with; low temperature resistance

Nylon (PA)      High strength; excellent wear and chemical resistance; low humidity resistance

PETG            Food safe; good strength; easy to print with

TPU             Very flexible; difficult to print accurately

PEI             Excellent strength-to-weight ratio; excellent fire and chemical resistance; high cost


2.4.3 Characteristics of FDM:

The main characteristics of FDM are summarized in the table below:

                                  Table 3: FDM properties.

Materials               Thermoplastics (PLA, ABS, PETG, PC, PEI, etc.)

Dimensional accuracy    ±0.5% (lower limit ±0.5 mm) – desktop; ±0.15% (lower limit ±0.2 mm) – industrial

Typical build size      200 x 200 x 200 mm – desktop; 1000 x 1000 x 1000 mm – industrial

Common layer height     50 to 400 microns

Support                 Not always required (dissolvable supports available)
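The dimensional accuracy figures in Table 3 combine a percentage with a lower limit; a small helper that makes the rule explicit:

```python
def fdm_tolerance_mm(dimension_mm, industrial=False):
    """Expected tolerance for a nominal dimension, per Table 3.

    The tolerance is a percentage of the dimension, but never tighter than
    the machine's lower limit (0.5 mm desktop, 0.2 mm industrial).
    """
    if industrial:
        return max(0.0015 * dimension_mm, 0.2)   # +/- 0.15%
    return max(0.005 * dimension_mm, 0.5)        # +/- 0.5%

# A 50 mm feature on a desktop machine is limit-bound: its tolerance is
# still +/- 0.5 mm, since 0.5% of 50 mm is only 0.25 mm.
```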


2.4.4 Benefits of FDM:

The key advantages of the technology are summarized below:

  • FDM is the most cost-effective way of producing custom thermoplastic parts and prototypes.

  • The lead times of FDM are short (as fast as next-day-delivery), due to the high availability of the technology.

  • A wide range of thermoplastic materials is available, suitable for both prototyping and some non-commercial functional applications.

2.4.5 Limitations of FDM:

The key disadvantages of the technology are summarized below:

  • FDM has the lowest dimensional accuracy and resolution compared to other 3D printing technologies, so it is not suitable for parts with intricate details.

  • FDM parts are likely to have visible layer lines, so post processing is required for a smooth finish.

  • The layer adhesion mechanism makes FDM parts inherently anisotropic.


2.5 CNC milling

While a 3D printer creates a prototype by adding or binding material, a CNC (Computer Numerical Controlled) milling machine does just the opposite: starting with a solid block, it removes material to create the prototype. As with 3D printers, there is a whole variety of milling machines, from advanced high-end machines that can mill complicated and extremely detailed models out of blocks of titanium, to regular basic custom-built machines, and of course many in between. CNC milling machines were developed from conventional milling machines, where the tool is moved by operating a hand wheel for each axis. The basis of adding numerical control is simple: replace the hand wheels with motors and some electronics to control the position of the tool.


                                Figure 2.10: A typical milling machine

Figure 2.10 shows a typical 3-axis milling machine; it can be quite similar to an FDM printer in how it is put together. It has a platform where the solid block of material sits, which can typically be moved in the X and Y directions. Above the block sits the milling tool, which moves up and down in the Z direction to complete the 3-axis system. Removing material requires quite a bit of force, so keeping the work piece fixed on the platform while milling can be challenging.

Milling strategies 

When CNC milling, it is important to optimize one or more of the following parameters:

  •  Highest possible material removal rate

  •  Surface finish

  •  Tool life

  •  Heat

  •  Minimal interaction with the machine 

  •  Safety considerations

All of the above factors are directly affected by spindle speed, feed rate and milling strategy. Spindle speed is the rotational speed of the end mill in RPM (revolutions per minute), while cutting speed is the speed at which the end mill travels over the surface when milling. Feed rate is the rate at which the tool advances into the material. The feed rate that can be used is determined by the spindle speed, the number of cutting edges (flutes) on the tool, and the chip load. The chip load is the average thickness of the chips that are cut off the work piece.
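The feed-rate relationship just described is a simple product; a minimal helper (units: millimetres and minutes):

```python
def feed_rate_mm_per_min(spindle_rpm, flutes, chip_load_mm):
    """Feed rate = spindle speed x number of cutting edges x chip load.

    Chip load is the average chip thickness each flute removes per
    revolution, so this is the distance the tool can advance per minute.
    """
    return spindle_rpm * flutes * chip_load_mm

# e.g. a 2-flute end mill at 10,000 RPM with a 0.05 mm chip load
# supports a feed rate of about 1000 mm/min.
```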

Normally a milling job is divided into two phases: roughing and finishing. Roughing is all about maximizing volume removal at the expense of surface quality, so high feed rates and a large step-over are used. The finishing process, on the other hand, uses a small step-over; typically only 0.1-0.3 mm of material is removed on each pass. Finishing operations may therefore be very time-consuming.

When removing material there are normally many different milling motion strategies to choose from, and knowledge about things like spindle speeds, feed rates and step-over is required. A good CAM (Computer Aided Manufacturing) program will normally offer optimized motion strategies for different milling operations.


2.5.1 Implementing rapid prototyping using CNC machining (CNC-RP) through a CAD/CAM interface

CNC-RP, or Computer Numerical Controlled Rapid Prototyping through CAD/CAM, is a method which enables automatic generation of process plans for a component that is to be machined (milled). By using advanced geometric algorithms, fully automatic NC code generation is achieved directly from CAD models with no human interaction, a capability necessary for a practical rapid prototyping system.

Most RP systems are based on the additive layer stacking process, and attempts to automate CNC machining have also been approached from the perspective of traditional machining methods, but it is necessary to re-think how parts can be held, oriented, and cut.

The idea is to place the stock material between two opposing chucks. The material can then be rotated by a rotary indexer. For each orientation, all visible surfaces are milled by an end mill which can move in the x, y, and z direction. A set of sacrificial supports will keep the part connected to the uncut ends of the stock material while milling. When all operations are complete, the supports are milled or sawed off, leaving the part free to be removed. Figure 2.11 shows the process.



Figure 2.11: Rapid machining; (a) set up, (b) sections machining approach, (c) Part Section machining steps, (d) Support Section machining steps, and (e) Support removal steps

     A visibility algorithm sorts the surfaces to be milled by allocating each surface to the set-up angle that can reach it with minimum distance, calculated using depth calculations on the slice geometry of the part file. Tool paths for both the roughing and finishing processes are generated from the original CAD model of the part; the tool paths in CNC-RP are therefore not based on an STL file but on the native surface geometry. This way, CNC-RP avoids the approximation errors of additive processes (3D printing), which calculate tool paths from STL models. The steps so far result in a complete set of NC code and a set-up sheet. The set-up sheet lists the tools required, tool changer locations, and the diameter and length of the stock material. To run the program, the user loads the material and the NC code and presses the start button, which initiates a cycle start. However, as can be seen in the flowchart in Figure 2.12, some user inputs are required to obtain the NC code and set-up sheet.


Figure 2.12: System Flowchart illustrating interaction between CAD models in the CAM system and algorithms, through the STL file format.


MeshCAM

MeshCAM is a 3D CAM program that translates 3D files into something that the Nomad can use. MeshCAM has been under continuous development for 10 years. The MeshCAM process consists of three steps:

  •   Load a file from almost any CAD program 

  •   Build an efficient toolpath with minimal input 

  •   Save G-code that works on the CNC machine

MeshCAM works with almost every 3D CAD program by opening the two most common 3D file formats, STL and DXF. A CAD program is not even required: MeshCAM can open any image file (JPG, BMP, or PNG) and convert it to a 3D surface that can be machined directly. MeshCAM has an Automatic Toolpath Wizard to help create tool paths; it picks values like feed rates, speeds, and depths of cut to reduce the learning curve associated with CNC machining. The user picks the cutters and tells MeshCAM the desired quality level, and MeshCAM then analyzes the model to pick starting values, which can be tweaked or used as-is. Another MeshCAM feature is that it can add sacrificial supports to the part, to keep it connected to the stock material while milling if the part is too complicated for vices or clamps. MeshCAM also has a built-in post-processor to transform the G-code to work with various CNC machines; it supports many machine types as-is and can be extended to support most others.

3. METHODOLOGY

3.1 WORKFLOW

      We start by capturing every surface of the object with multiple overlapping images taken from different angles. The capture stage is often the bottleneck of the 3D asset creation process, but once capture is automated under controlled conditions, the data creation (image capturing) phase can be sped up significantly. The aim is to get a full 360-degree view of the object without including peripheral items, so features behind or around the object which do not relate to the item being captured should not be included.

Merge the images. Following the capture of the images, the mapping software seamlessly merges all the images to create a single multi-angle representation of the object. It uses the overlap and distinctive features in the images to align them appropriately.

Export and publish a 3D model. Finally, once the images have been merged, the 3D model is ready to be exported.

3.2 The PhotoScan Workflow

     PhotoScan makes the order of operations easy to follow via its Workflow menu. Basic operations can be accomplished by stepping through the menu and performing each of the following tasks in turn.

  •  Add Photos (or Add Folder containing all photos from your shoot) - this first step loads all of your raw images into the software’s interface.

  •  Align Photos - the first processing step compares the pixels in your photos to find matches and estimates camera locations and 3D geometry from them.

  •  Build Dense Cloud - once satisfied with the alignment, the sparse point cloud (a mere fraction of the total data) is processed into a dense cloud in which each matchable pixel will get its own X, Y, Z location in 3D space

  •  Build Mesh - this step connects each set of three adjacent points into a triangular face, which combine seamlessly to produce a continuous mesh over the surface of your model

  •  Build Texture - In the final step, the original images are combined into a texture map and wrapped around the mesh, resulting in a photo­realistic model of your original object.

3.3 Importing Photos

   Step one is getting our data (our images) into PhotoScan.

1. Download a sample set of images.

2. Go to Workflow > Add Folder and navigate to the directory containing your images.

3. Select Create camera from each file and click OK, as shown in Fig 3.1.


                                            Fig. 3.1: Importing photos in the PhotoScan workflow

3.4 Aligning Photos

      It is time to tell the software to compare the photos and figure out how they overlap in 3D space. This is the magic ingredient of the photogrammetry process, without which nothing else would work.

  1.  Go to Workflow > Align Photos

  2.  Accept the default values and hit OK. Since we created masks in step 1, we went into the advanced options menu and enabled the Constrain Features by Mask checkbox.

     The result of this process should be a sparse point cloud of our object that is spotty but recognizable, surrounded by blue squares representing each camera position in 3D space, as shown in Fig 3.2. We have a 3D representation of our object! But now we must refine it.


                                                                   Fig. 3.2: Aligning photos in the PhotoScan workflow

     Select a photo and its position will be highlighted in pink. Right-clicking an image gives several options: disabling cameras (if they are too blurry or contain bad data), resetting alignment, or aligning photos that were missed the first time around.

3.5 Editing the Sparse Cloud

   Once we are satisfied with the alignment of all enabled cameras, it is a suitable time to get rid of any clearly bad data points. For editing, it can help to hide the cameras to see the object better.

1. Go to View > Show/Hide Items.

2. Click on Show Cameras to turn the blue squares off, as shown in Fig 3.3.


                                                                   Fig. 3.3: Editing the sparse cloud

The editing tools can be found next to the mouse pointer icon, and are used to select points to be deleted.

1. Use the circular lasso tool to select points and highlight them pink.

2. Press the Delete key to remove these points, as shown in Fig 3.4.

          

                                                           Fig.3.4

3.6 Building Dense Cloud

This step uses the aligned photos to generate a point cloud that should be dense enough to look like a solid model from a distance, as shown in Fig 3.5.

1. Go to Workflow > Build dense cloud.

2. Set the Quality value as desired.


                                                                         Fig 3.5

3.7 Building Mesh

This cloud looks great from a distance, but if you zoom in you will notice that it is really a cloud of points, as the name suggests. The points have to be connected into faces to make a continuous surface mesh, as in Fig 3.6.

1. Go to Workflow > Build Mesh.

2. Make sure the Source Data is Dense Cloud.

3. Set the face count as high as desired.

                                 

                                                                  Fig 3.6

3.8 Building Texture

For the final result, as in Fig 3.7:

1. Go to Workflow > Build Texture.

2. You can try different blending mode settings, but Mosaic or Average should give the best results, depending on the quality of your photos.


                       

                                                                    Fig 3.7


3.9 Properties / observation

In this research, the model with the best ratio of reconstruction time to quality (Medium + 8K texture) was chosen as the most suitable. It has the following specifications:

  • 3D model composed of 274,911 points (548,114 polygons);

  • chosen file format: .OBJ;

  • 3D model file size: 45.37 MB;

  • texture with a resolution of 8K (8,000 x 8,000 pixels);

  • single texture file in .JPG format;

  • texture file size: 8.47 MB.

For the model and texture file, the formats that were chosen are .OBJ and .JPG because they are:

  • industry standards;

  • most widespread in the 3D scanning community and for typical use;

  • formats for storage, editing, viewing, and displaying 3D models;

  • cross-platform;

  • for the stated reasons most suited for this project.


3.10 Exporting the Model

Your finished model can be exported in various formats for display or for import into other 3D analysis suites, animation software or game engines. Common formats are:

  •  3D PDF: an interactive format that is widely accessible, as the model can be viewed, manipulated, and even measured using the ubiquitous and free Adobe Reader.

  •  Wavefront (.obj) and Collada (.dae) are the most portable 3D mesh formats. If you are working with other animation platforms like Blender or game engines like Unity 3D, these formats can be easily added as assets to your project.

  •  Point cloud (.las, .txt, etc.): these formats are probably the most future-proof for long-term storage, and also offer many options for secondary analysis in other tools like Geomagic and MeshLab.


